{{DISPLAYTITLE:{{#if:{{NAMESPACE}}|{{NAMESPACE}}:|}}{{lcfirst:{{PAGENAME}}}}}}
[https://libvirt.org libvirt] is a toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes). It does so by providing a common API to different virtualization backends.
{{expansion|Only a superficial look is provided.}}


== Setup ==


Using the {{nixos:option|virtualisation.libvirtd}} options, libvirtd can be enabled on a NixOS machine.


{{file|||<nowiki>
virtualisation.libvirtd.enable = true;

# Enable TPM emulation (optional)
# install pkgs.swtpm system-wide for use in virt-manager (optional)
virtualisation.libvirtd.qemu = {
  swtpm.enable = true;
};

# Enable USB redirection (optional)
virtualisation.spiceUSBRedirection.enable = true;
</nowiki>|name=/etc/nixos/configuration.nix|lang=nix}}
 
To enable local user access to libvirt, for example by using <code>virt-manager</code> or <code>gnome-boxes</code>, add yourself to the <code>libvirtd</code> group
 
{{file|/etc/nixos/configuration.nix|nix|<nowiki>
users.users.myuser = {
  extraGroups = [ "libvirtd" ];
};
</nowiki>}}


== Configuration ==
 
=== UEFI with OVMF ===
 
See [https://ostechnix.com/enable-uefi-support-for-kvm-virtual-machines-in-linux/ this tutorial] on how to run a guest machine in UEFI mode using <code>virt-manager</code>.
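
On NixOS, OVMF firmware can also be made available to libvirt declaratively via the {{nixos:option|virtualisation.libvirtd.qemu.ovmf}} options. A minimal sketch; the overridden <code>pkgs.OVMF</code> build with secure boot and TPM support is optional and just one possible choice:

<syntaxhighlight lang="nix">
virtualisation.libvirtd.qemu.ovmf = {
  enable = true;
  # Optional: use a firmware build with secure boot and TPM support
  packages = [
    (pkgs.OVMF.override {
      secureBoot = true;
      tpmSupport = true;
    }).fd
  ];
};
</syntaxhighlight>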
 
=== Nested virtualization ===


If you would like to enable nested virtualization, so that your guests can run KVM hypervisors themselves, use {{nixos:option|boot.extraModprobeConfig}}, for example (on AMD hosts, use <code>kvm_amd</code> instead of <code>kvm_intel</code>):

{{file|||<nowiki>
boot.extraModprobeConfig = ''
  options kvm_intel nested=1
'';
</nowiki>|name=/etc/nixos/configuration.nix|lang=nix}}
 
=== Networking ===
 
==== Default networking ====
 
Enable and start the default network using the following commands:
 
<syntaxhighlight lang="console">
# virsh net-autostart default
# virsh net-start default
</syntaxhighlight>
 
This will configure the default network to start automatically on boot and immediately activate it. You may need to whitelist the interface for the firewall like so:
 
{{File|3=networking.firewall.trustedInterfaces = [ "virbr0" ];|name=/etc/nixos/configuration.nix|lang=nix}}
 
==== Bridge networking ====
 
Create a XML file called <code>virbr0.xml</code> with the definition of the bridge interface.
 
<syntaxhighlight lang="xml">
<network>
  <name>virbr0</name>
  <forward mode='bridge'/>
  <bridge name='virbr0'/>
</network>
</syntaxhighlight>
 
Add and enable bridge interface.
 
<syntaxhighlight lang="bash">
virsh net-define virbr0.xml
virsh net-start virbr0
ip link add virbr0 type bridge
ip address add dev virbr0 10.25.0.1/24
ip link set dev virbr0 up
</syntaxhighlight>
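
The <code>ip</code> commands above do not persist across reboots. As a sketch of a declarative alternative, the same host-only bridge (no physical ports attached, using the address from the example) can be declared in the NixOS configuration instead:

<syntaxhighlight lang="nix">
# Create the bridge at boot with no enslaved interfaces
networking.bridges.virbr0.interfaces = [ ];
# Assign the host-side address used in the example above
networking.interfaces.virbr0.ipv4.addresses = [{
  address = "10.25.0.1";
  prefixLength = 24;
}];
</syntaxhighlight>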
 
Edit the libvirt guest <code>my_guest</code> XML file and add the bridge interface to it.
 
<syntaxhighlight lang="bash">
virsh edit my_guest
</syntaxhighlight>
 
Add:
 
<syntaxhighlight lang="xml">
  <devices>
    [...]
    <interface type='bridge'>
      <mac address='52:54:00:12:34:56'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    [...]
  </devices>
</syntaxhighlight>
 
Inside the guest configure networking for the interface <code>enp1s0</code> (name may differ).
 
{{file|/etc/nixos/configuration.nix|nix|<nowiki>
networking.interfaces.enp1s0 = {
  ipv4.addresses = [{
    address = "10.25.0.2";
    prefixLength = 24;
  }];
  defaultGateway = {
    address = "10.25.0.1";
    interface = "enp1s0";
  };
};
</nowiki>}}
 
The host should now be able to reach the guest via the bridge interface and vice versa.
 
=== File sharing via virtiofs mount ===
One of the best ways to share a host directory with the guest OS is with [https://virtio-fs.gitlab.io/ virtiofs]. On the host system, install the <code>virtiofsd</code> package (the example below also installs <code>guestfs-tools</code>, which is optional):<syntaxhighlight lang="nix">
environment.systemPackages = with pkgs; [
  guestfs-tools
  virtiofsd
];
</syntaxhighlight>Next, a few sections of the XML must be edited, which can be done manually or via virt-manager in the guest configuration GUI. If using virt-manager, first navigate to Edit > Preferences > General in the toolbar, and click "Enable XML Editing". Next, open the virtual machine and under the hardware configuration, navigate to Memory and check the box "Enable shared memory". This will add a <code>memoryBacking</code> block with shared access to the XML for you, similar to this:<syntaxhighlight lang="xml">
<memory unit="KiB">1638400</memory>
<currentMemory unit="KiB">1638400</currentMemory>
<memoryBacking>
  <source type="memfd"/>
  <access mode="shared"/>
</memoryBacking>
</syntaxhighlight>While still in the hardware configuration, click "Add Hardware" and select "Filesystem". For driver, select "virtiofs". For source path, input the folder on the host machine you wish to share, no trailing slash. For target path, don't put a path but instead a tag/label that is easily identifiable. It will be used in the mount options in the guest OS setup shortly. Once done, you should have a new Filesystem device configuration similar to this:<syntaxhighlight lang="xml">
<filesystem type="mount" accessmode="passthrough">
  <driver type="virtiofs"/>
  <binary path="/run/current-system/sw/bin/virtiofsd"/>
  <source dir="/media"/>
  <target dir="my_host_media_share"/>
  <alias name="fs0"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</filesystem>
</syntaxhighlight>If your guest system is using NixOS, you can boot the system and add the new filesystem entry to auto-mount on boot and you're done:<syntaxhighlight lang="nix">
fileSystems."/media" = {
  device = "my_host_media_share";
  fsType = "virtiofs";
};
</syntaxhighlight>If the system fails to fully reboot after applying the changes, ensure the filesystem device matches the "Target path" in your XML exactly.
 
==== Error starting domain: internal error: Child process (/run/current-system/sw/bin/virtiofsd --print-capabilities) unexpected exit status 127: libvirt:  error : cannot execute binary /run/current-system/sw/bin/virtiofsd: No such file or directory ====
This error means virtiofsd was not installed on the host system. Ensure the system package was installed before making changes in virt-manager.
 
==== Error starting domain: operation failed: Unable to find a satisfying virtiofsd ====
The virtiofsd binary path needs to be specified in the filesystem configuration. virt-manager doesn't add this by default and instead assumes a default path that doesn't exist under NixOS. Open the guest machine's hardware details page, click on the passthrough filesystem created earlier, open the XML tab and inside the <code>&lt;filesystem&gt;...&lt;/filesystem&gt;</code> element add the following to tell libvirt where to find the virtiofsd binary:<syntaxhighlight lang="xml">
<binary path="/run/current-system/sw/bin/virtiofsd"/>
</syntaxhighlight>
 
=== File sharing via WebDAV ===
 
Another recommended way to share files between host and guest is to use <code>spice-webdavd</code>.
 
Shutdown the client, in this example named <code>my_guest</code>, and edit the libvirt XML file.
 
<syntaxhighlight lang="bash">
virsh edit my_guest
</syntaxhighlight>
 
Add the following snippet after <code><channel type='unix'>[...]</channel></code> part inside the devices subsection:
 
<syntaxhighlight lang="xml">
    <channel type='spiceport'>
      <source channel='org.spice-space.webdav.0'/>
      <target type='virtio' name='org.spice-space.webdav.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
</syntaxhighlight>


Start the guest machine. Inside the guest, add the following part to your system configuration and apply it.

{{file|/etc/nixos/configuration.nix|nix|<nowiki>
services.spice-webdavd.enable = true;
</nowiki>}}

List available shares for the guest.

<syntaxhighlight lang="bash">
curl localhost:9843
</syntaxhighlight>

Mount an example share called <code>myshare</code> to the mountpoint <code>/root/myshare</code>.
 
{{file|/etc/nixos/configuration.nix|nix|<nowiki>
services.davfs2 = {
  enable = true;
  settings.globalSection.ask_auth = 0;
};
 
fileSystems = {
  "/root/myshare" = {
    device = "http://localhost:9843/myshare";
    fsType = "davfs";
    options = [ "nofail" ];
  };
};
</nowiki>}}
 
=== Hooks ===
Libvirt allows the use of hooks to run custom scripts during specific events, such as daemon lifecycle events, domain lifecycle events, and network events. On NixOS, you can configure hooks via the NixOS module to automate the placement of hook scripts in the appropriate directories.
 
The following directories are used for placing hook scripts:
 
* '''<code>/var/lib/libvirt/hooks/daemon.d/</code>''': Scripts here are triggered by daemon events like start, shutdown, and SIGHUP.
* '''<code>/var/lib/libvirt/hooks/qemu.d/</code>''': Scripts for handling QEMU domain events such as begin, end, and migration.
* '''<code>/var/lib/libvirt/hooks/lxc.d/</code>''': Scripts for LXC container events like begin and end.
* '''<code>/var/lib/libvirt/hooks/libxl.d/</code>''': Scripts for Xen domains managed by <code>libxl</code> (begin/end events).
* '''<code>/var/lib/libvirt/hooks/network.d/</code>''': Scripts triggered by network events such as begin and end.
 
See the [https://libvirt.org/hooks.html libvirt documentation] for more information.
 
An example config would be:<syntaxhighlight lang="nix">
{
  virtualisation.libvirtd.hooks = {
    daemon = {
      "example" = ./scripts/daemon-example.sh;
    };
    qemu = {
      "example" = ./scripts/qemu-example.sh;
    };
    network = {
      "example" = ./scripts/network-example.sh;
    };
  };
}
</syntaxhighlight>Note that after adding the configuration and switching, you need to run the following command once to set up the hooks:<syntaxhighlight lang="bash">
systemctl start libvirtd-config.service
</syntaxhighlight>
 
=== PCI Passthrough ===
 
For detailed instructions on configuring PCI passthrough with libvirt, refer to the [[PCI passthrough]] page.
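
As a rough sketch of what such a configuration involves (the VFIO device IDs below are placeholders; see the linked page for how to determine the correct values for your hardware):

<syntaxhighlight lang="nix">
# Enable the IOMMU (use "amd_iommu=on" on AMD hosts)
boot.kernelParams = [ "intel_iommu=on" ];
# Load the VFIO modules early in boot
boot.initrd.kernelModules = [ "vfio_pci" "vfio" "vfio_iommu_type1" ];
# Bind the passed-through device to vfio-pci (placeholder vendor:device IDs)
boot.extraModprobeConfig = ''
  options vfio-pci ids=10de:1b80,10de:10f0
'';
</syntaxhighlight>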
 
== Clients ==


NixOS provides some packages that can make use of libvirt or are useful with libvirt.


Following are notes regarding the use of some of those tools
=== guestfs-tools ===
Includes virt-sysprep, used to prepare a VM image for use. Review the manpages of virt-sysprep, virt-clone, and virt-builder.

==== error: cannot find any suitable libguestfs supermin ====
Use the package libguestfs-with-appliance. See https://github.com/NixOS/nixpkgs/issues/37540
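
The libguestfs-with-appliance fix can be applied in the system configuration, for example:

<syntaxhighlight lang="nix">
environment.systemPackages = [ pkgs.libguestfs-with-appliance ];
</syntaxhighlight>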


==== <code>virt-builder</code> ====


virt-builder is installed with <code>guestfs-tools</code>, but has some issues from its packaging.


It is possible to work around those issues without modifying the package (when a pristine nixpkgs is needed).


This will make your user use the shipped repo configurations, and works around the fact that virt-builder derives its configuration path from its executable name; because the executable is wrapped, it is named differently.


=== NixVirt ===

[https://github.com/AshleyYakeley/NixVirt NixVirt] is a flake that provides NixOS and Home Manager modules for setting up libvirt domains, networks and pools declaratively.


=== Accessing QEMU VMs through a web browser ===
==== Get EyeOS Spice Web Client ====


As mentioned, the experience with the EyeOS Spice Web Client has been the best so far. Another option is [https://gitlab.freedesktop.org/spice/spice-html5/ spice-html5] from freedesktop.org.


1. Download the [https://github.com/eyeos/spice-web-client/ EyeOS Spice Web Client] and unpack it (if necessary) or, for example, just <code>git clone https://github.com/eyeos/spice-web-client/ /var/www/spice</code>


And finally you can access the VM's GUI through <code>https://mydomain.tld:4500/spice/index.html?host=mydomain.tld&port=5959</code>
[[Category:Virtualization]]
[[Category:Applications]]