Primer for Libvirt

Chris Coveyduck

If you, like me, prefer to work directly with libvirt instead of using existing wrappers such as lxc, this quick guide will assist you in setting up a VM swiftly while acquainting you with some of the fundamental steps.

Virsh

Virsh is the primary command-line interface for managing guest domains. It allows you to create, update, and delete everything from networks to guest VMs. It also features a convenient auto-complete function—simply type 'virsh' and press the tab key twice to begin exploring.
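
For example, here are a few commands you will use constantly when checking what is defined on a host (my_guest_vm below is just a placeholder name):

# list all guest VMs, running and stopped
virsh list --all

# list all networks
virsh net-list --all

# show basic information about a guest
virsh dominfo my_guest_vm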

Creating, Updating and Removing domains

Guest VMs (domains) and networks can be swiftly created using virsh. It's important to note that for guest VMs, this process is distinct from installation, which I will discuss later in the context of virt-install.

In my experience, libvirt handles all configurations through XML files. There are various examples available, and I have provided one for a network in my series on working with OVN/OVS.

At a minimum, you will need to create and configure a network bridge for the guest VMs. While we utilised OVS in my series, beginners might want to start by defining a network that uses a native Linux bridge, as shown below.

# create an XML file
nano net_default.xml

# insert the network definition (assumes br0 as the name of an existing bridge)
<network>
  <name>default</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

# define it using virsh
virsh net-define net_default.xml
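
Note that defining a network only registers its configuration; you will usually also want to start it and have it come up automatically with libvirt.

# start the network and set it to autostart
virsh net-start default
virsh net-autostart default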

Domains can be dumped to XML for backup purposes and to enable programmatic configuration updates. Additionally, domains can be directly edited, as demonstrated in the examples below.

# dump my_guest_vm to an xml file
virsh dumpxml my_guest_vm > my_guest_vm.xml

# update (redefine) my_guest_vm from xml
virsh define my_guest_vm.xml

# just directly edit my_guest_vm to make a change (opens using vi)
virsh edit my_guest_vm

Get accustomed to using virsh, which is quite straightforward for executing general tasks.

Installing Guest VMs (domains)

Creating and installing guest VMs is a broad topic, and I will focus on one method that utilizes cloud images available for various distributions. You can also create and package your own, but as this is an introductory guide, that may be too advanced for most readers.

To install a guest VM, the following steps are necessary:

  1. Acquire a suitable cloud image for your distribution, commonly available in qcow2 format.
  2. Create a virtual disk for the new guest VM, using the cloud image as a backing file.
  3. Create and configure a cloud-init ISO image to automate the guest VM setup, such as establishing user credentials for access.
  4. Run virt-install to configure and launch the new VM.

Creating the virtual disk

Creating a new qcow2 disk using the qemu-img command-line is a straightforward process. This disk will serve as the boot device for our VM, eliminating the need to download an installation ISO of the distribution, since the operating system is already present in the backing image.

Below are the basic commands to acquire a cloud image of Ubuntu 24.04 LTS and to create a new virtual disk from it. We will then resize the disk from its original size.

# download the image
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# copy it into a sensible (imho) location
cp noble-server-cloudimg-amd64.img /var/lib/libvirt/images

# create a new virtual disk using the cloudimg as the backing store
qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/noble-server-cloudimg-amd64.img /var/lib/libvirt/images/noble-vm.qcow2

We now have a virtual disk; let's have a look at it using qemu-img info to see some detail.

# get image info
qemu-img info /var/lib/libvirt/images/noble-vm.qcow2

image: /var/lib/libvirt/images/noble-vm.qcow2
file format: qcow2
virtual size: 3.5 GiB (3758096384 bytes)
disk size: 196 KiB
cluster_size: 65536
backing file: /var/lib/libvirt/images/noble-server-cloudimg-amd64.img
backing file format: qcow2
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /var/lib/libvirt/images/noble-vm.qcow2
    protocol type: file
    file length: 192 KiB (197120 bytes)
    disk size: 196 KiB

The image has a virtual size of 3.5 GiB, which is not excessively large, yet it occupies only 196 KiB of actual disk space. The cloud image serving as the backing store is quite modest too, requiring only about 560 MiB for the download.

Resizing can be easily accomplished with the qemu-img resize command.

# resize the noble-vm.qcow2 image to 40 GiB
qemu-img resize /var/lib/libvirt/images/noble-vm.qcow2 40G

# reinspect the noble-vm.qcow2 image
qemu-img info /var/lib/libvirt/images/noble-vm.qcow2

image: /var/lib/libvirt/images/noble-vm.qcow2
file format: qcow2
virtual size: 40 GiB (42949672960 bytes)
disk size: 200 KiB
cluster_size: 65536
backing file: /var/lib/libvirt/images/noble-server-cloudimg-amd64.img
backing file format: qcow2
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /var/lib/libvirt/images/noble-vm.qcow2
    protocol type: file
    file length: 257 KiB (263168 bytes)
    disk size: 200 KiB

We now have a more sensible image. It's still thin-provisioned, occupying only 200 KiB on disk, but as it fills up we need to be mindful of managing the physical storage capacity.
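
One caveat with backing files: the new disk depends on the cloud image, so don't move or delete it while VMs are using it. If you ever need a standalone disk, qemu-img convert can flatten the chain into a new image (noble-vm-flat.qcow2 below is just an illustrative name):

# flatten the image into a standalone copy with no backing file
qemu-img convert -O qcow2 /var/lib/libvirt/images/noble-vm.qcow2 /var/lib/libvirt/images/noble-vm-flat.qcow2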

Building the cloud-init ISO

Cloud images must be customized during the installation and initialization process. This is achieved by creating an ISO image whose volume ID is set to 'cidata' (this is how cloud-init's NoCloud datasource finds it), containing all the necessary customization settings. At the very least, you'll need to specify a username and a password or SSH key, but you can also fully configure network and service settings.

I won't transform this article into a cloud-init tutorial, so I will simply share a functional copy of the one I use in a laboratory setting. The structure of my cloudinit.iso is as follows:

meta-data # empty (a minimal example is shown below)
network-config # empty most of the time as I just set IPs manually
user-data # see template below
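
Mine stays empty, but if you want cloud-init to set a per-VM hostname, a minimal meta-data looks like this (the values are illustrative):

instance-id: noble-vm
local-hostname: noble-vm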

The files must be placed at the root of the ISO image that we are going to create shortly. Below is a functional user-data example from my laboratory.

#cloud-config
ssh_pwauth: false
users:
  - name: 'adminuser'
    groups: users,adm,dialout,audio,netdev,video,plugdev,cdrom,games,input,gpio,spi,i2c,render,sudo
    shell: /bin/bash
    lock_passwd: true
    ssh_authorized_keys:
      - '' # paste your SSH public key here
    sudo: ALL=(ALL) NOPASSWD:ALL

This configuration sets up a new user (adminuser) with SSH key authentication. Password authentication is disabled entirely (ssh_pwauth must sit at the top level of the file, not under the user), and the user can elevate to root with sudo without a password, since no password exists.
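
If you don't already have a keypair to paste into ssh_authorized_keys, you can generate one with ssh-keygen and copy in the contents of the .pub file (the file path below is simply the ssh-keygen default):

# generate a keypair and print the public half for user-data
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C 'adminuser'
cat ~/.ssh/id_ed25519.pub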

You'll need to 'burn' these three files onto an ISO using a tool like mkisofs (or genisoimage), setting the -volid switch to 'cidata'.

# create the ISO for cloudinit
mkisofs -o cloudinit.iso -volid cidata -joliet -rock user-data meta-data network-config

Setting input-charset to 'UTF-8' from locale.
Total translation table size: 0
Total rockridge attributes bytes: 457
Total directory bytes: 0
Path table size(bytes): 10
Max brk space used 0
182 extents written (0 MB)
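
Since the virt-install command later on expects the ISO under /var/lib/libvirt/images, copy it there alongside the virtual disks.

# copy the ISO next to the virtual disks
cp cloudinit.iso /var/lib/libvirt/images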

You may also want to add network configuration to the cloud-init; otherwise you will need to do all of this by hand through a serial console using virsh. The following is a Netplan configuration added into network-config in the cloud-init.

network:
  version: 2
  ethernets:
    enp1s0:
      dhcp4: no
      dhcp6: no
      addresses:
        - 192.168.14.2/24
        - 2001:1::1/64
      gateway4: 192.168.14.1
      gateway6: 2001:1::2
      nameservers:
        search: [foo.local, bar.local]
        addresses: [8.8.8.8]
      # static routes
      routes:
        - to: 192.0.2.0/24
          via: 192.168.14.254
          metric: 3

Customize this as you prefer, but keep in mind that all VMs installed from this cloud-init ISO will share the same network configuration. In practice, you'd likely want a script to generate a cloud-init ISO per VM, which is quite straightforward, but that's beyond the scope of this post.

Cloud image installation

With a virtual disk and a valid cloud-init ISO you can now execute virt-install, which will create the VM according to the parameters we supply and start the installation process. Essentially this just boots the cloud image and customises it based on our cloud-init.

# create/install the VM
virt-install --name noble-vm --os-variant ubuntunoble --vcpus 2 --memory 2048 \
  --graphics vnc --virt-type kvm --disk /var/lib/libvirt/images/noble-vm.qcow2 \
  --cdrom /var/lib/libvirt/images/cloudinit.iso \
  --network network=default,model=virtio

When you execute the above, you can more or less just disconnect by pressing CTRL+C and then open a serial console using virsh.

# connect to the serial console using virsh
virsh console noble-vm --safe
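
To detach from the console again, press CTRL+]. Once the VM is up with the network configuration from our cloud-init, you can also just SSH in using the key that matches the one in user-data:

# log in over SSH with the key from user-data
ssh adminuser@192.168.14.2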

Wrapping up

And that concludes the basics. Although this is just an introduction, hopefully it's enough to get you started!
