[https://www.proxmox.com/proxmox-ve {{PAGENAME}}] - ''PVE'' for short - ([[wikipedia:en:{{PAGENAME}}]]) is a platform for containerization and virtualization.
 
PVE is open source and is based on Debian GNU/Linux (with a customized kernel from Ubuntu) and supports a variety of filesystems (e.g. [[ZFS]]) and storage backends/network filesystems (e.g. [[Ceph]]). [[Ceph]] can be set up, administered and monitored through the web interface, just as most other functions of PVE. There is also an API, and PVE can be configured through config files and CLI commands.
 
[[File:Proxmox-VE-8-0-Cluster-Summary.png|thumb|Proxmox-VE-8-0-Cluster-Summary]]
 
PVE can manage a "data center" as a cluster of machines and storage through a unified web GUI that allows management of the whole cluster from each of the nodes.
 
Proxmox VE uses
* [[#KVM]] for virtualization and
* [[#LXC]] for containerization.
NixOS runs on both.
 
<blockquote>
The instructions should work for PVE&nbsp;7.2 and later with NixOS&nbsp;22.05 and later.
</blockquote>
== Deploying Proxmox with NixOS ==


The [https://github.com/SaumonNet/proxmox-nixos/ proxmox-nixos] project allows running the Proxmox hypervisor on top of NixOS.


== KVM ==


It is possible to generate generic qcow2 images and attach them to VMs with <code>qm importdisk</code> as shown [https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Importing_to_Proxmox_VE here].
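For example (a sketch; the VM id, image file name, and target storage are placeholders, and the VM must already exist):

<pre>
root@pve:~# qm importdisk &lt;vmid&gt; nixos.qcow2 local-lvm
</pre>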
 
A better option is to generate a VMA image that can be imported as a VM on Proxmox VE. With this method, many VM configuration options such as CPU, memory, network interfaces, and serial terminals can be specified in Nix instead of being set manually in the Proxmox UI.
 
=== Generating VMA ===
 
<blockquote>
The first run will take some time, as a patched version of QEMU with support for the VMA format needs to be built.
</blockquote>
<pre>
nix run github:nix-community/nixos-generators -- --format proxmox
</pre>
Pass additional Nix configuration to the template with <code>--configuration filename.nix</code>. In addition to NixOS module options, Proxmox-specific options defined in [https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/virtualisation/proxmox-image.nix nixos/modules/virtualisation/proxmox-image.nix] can be used to set cores, memory, disk and other VM hardware options.
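A minimal sketch of such a configuration file (the values are illustrative; see proxmox-image.nix for the full option list and defaults):

<syntaxHighlight lang=nix>
{ ... }:

{
  # Illustrative values; consult proxmox-image.nix for all available options.
  proxmox.qemuConf = {
    name = "nixos-template";
    cores = 2;
    memory = 2048; # MiB
  };
}
</syntaxHighlight>

Build it with <code>nix run github:nix-community/nixos-generators -- --format proxmox --configuration vm.nix</code>, where <code>vm.nix</code> is the hypothetical file above.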


=== Deploying on Proxmox VE ===


The generated vma.zst file can be copied to <code>/var/lib/vz/dump/</code> (or any other configured VM dump storage path). A new VM can be spun up from it either using the GUI or the CLI:


<pre>
qmrestore /var/lib/vz/dump/vzdump-qemu-nixos-21.11.git.d41882c7b98M.vma.zst &lt;vmid&gt; --unique true
</pre>
<blockquote>
Note: the MAC address of <code>net0</code> defaults to <code>00:00:00:00:00:00</code>. Either override it through <code>proxmox.qemuConf.net0</code>, or set the <code>unique</code> attribute to true when importing the image on Proxmox VE.
</blockquote>
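A sketch of such an override (the MAC address is a made-up, locally administered example; the rest of the string follows the default from proxmox-image.nix):

<syntaxHighlight lang=nix>
{ ... }:

{
  # Example MAC only; choose one that is unique on your network.
  proxmox.qemuConf.net0 = "virtio=02:00:00:00:00:01,bridge=vmbr0,firewall=1";
}
</syntaxHighlight>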
By default, the generated image is set up to expose a serial terminal interface for ease of access.


<pre>
root@proxmox-server:~# qm start &lt;vmid&gt;
root@proxmox-server:~# qm terminal &lt;vmid&gt;
starting serial terminal on interface serial0 (press Ctrl+O to exit)

&lt;&lt;&lt; NixOS Stage 1 &gt;&gt;&gt;

loading module dm_mod...
running udev...
Starting version 249.4
.
.
.
[  OK  ] Reached target Multi-User System.


&lt;&lt;&lt; Welcome to NixOS 21.11.git.d41882c7b98M (x86_64) - ttyS0 &gt;&gt;&gt;

Run 'nixos-help' for the NixOS manual.

nixos login: root (automatic login)

[root@nixos:~]#
</pre>
 
=== Network configuration ===


Cloud-init can be enabled with


<pre>
services.cloud-init.network.enable = true;
</pre>
This will enable systemd-networkd, allowing cloud-init to set up network interfaces on boot.
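A minimal sketch of an image configuration with cloud-init networking enabled (the guest agent line is an optional extra, not something cloud-init requires):

<syntaxHighlight lang=nix>
{ ... }:

{
  # Let cloud-init bring up the network interfaces on boot.
  services.cloud-init.network.enable = true;

  # Optional (assumption): the QEMU guest agent improves integration with
  # the Proxmox UI but is not needed for cloud-init itself.
  services.qemuGuest.enable = true;
}
</syntaxHighlight>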


== LXC ==


=== Generating LXC template ===


<pre>
nix run github:nix-community/nixos-generators -- --format proxmox-lxc
</pre>
 
=== Privileged LXCs ===


Although not required, <code>proxmoxLXC.privileged</code> can be set to true to enable the DebugFS mount in privileged LXCs. If enabled in an unprivileged LXC, the mount will fail.
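A sketch (only use this for containers you actually create as privileged):

<syntaxHighlight lang=nix>
{ ... }:

{
  # Enables the DebugFS mount; the mount fails in unprivileged containers.
  proxmoxLXC.privileged = true;
}
</syntaxHighlight>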


=== Network configuration ===
 
The Proxmox LXC template uses systemd-networkd by default to allow network configuration by Proxmox. Set <code>proxmoxLXC.manageNetwork</code> to true to disable this and manage the network from within NixOS instead.
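For example (a sketch; the interface name <code>eth0</code> is an assumption about your container):

<syntaxHighlight lang=nix>
{ ... }:

{
  # Keep Proxmox from managing the network; configure it in NixOS instead.
  proxmoxLXC.manageNetwork = true;

  # Illustrative: bring up eth0 via DHCP from inside the container.
  networking.useDHCP = false;
  networking.interfaces.eth0.useDHCP = true;
}
</syntaxHighlight>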


=== Deploying on Proxmox VE ===


Copy the tarball to Proxmox, then create a new LXC with this template through the web UI or the CLI. The “nesting” feature needs to be enabled. Newer versions of Proxmox will have it enabled by default.
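A CLI sketch, assuming a pve-container version with NixOS support (4.1-5 or newer); the VM id, template file name, and network settings are placeholders:

<pre>
root@pve:~# pct create &lt;vmid&gt; local:vztmpl/&lt;template&gt;.tar.xz \
  --ostype nixos --hostname nixos \
  --features nesting=1 --unprivileged 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
</pre>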


As of now, not all of the configuration options in the web UI work for NixOS LXCs. Network configuration and adding SSH keys to the root user work, while setting a password for the root user and setting the hostname do not.


It is suggested to set a root password within the container on first boot.
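One way to do this from the Proxmox host (a sketch; the container prompt is illustrative):

<pre>
root@pve:~# pct start &lt;vmid&gt;
root@pve:~# pct enter &lt;vmid&gt;
[root@nixos:/]# passwd
</pre>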


The template built above without any options does not come with <code>/etc/nixos/configuration.nix</code>. A minimal working example is presented below. Be sure to run <code>nix-channel --update</code> and reboot the container before running <code>nixos-rebuild switch</code>.
<syntaxHighlight lang=nix>
{ pkgs, modulesPath, ... }:


{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  environment.systemPackages = [
    pkgs.vim
  ];
}
 
</syntaxHighlight>
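After saving the file as <code>/etc/nixos/configuration.nix</code> inside the container:

<pre>
[root@nixos:~]# nix-channel --update
# reboot the container here, as noted above
[root@nixos:~]# nixos-rebuild switch
</pre>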
 
=== LXC Console ===
You may need to set the Console Mode option to <code>/dev/console</code> (instead of the default of "tty") in order to make the console shell work.
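The same setting can be applied from the CLI (a sketch):

<pre>
root@pve:~# pct set &lt;vmid&gt; --cmode console
</pre>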
=== LXC See also ===


* earlier wiki page [[Proxmox Linux Container]]


== Name ==


''Proxmox Virtual Environment'' is also called
: ''Proxmox VE'' for short,
: shortened to ''PVE'',
: or just ''Proxmox''.


''Proxmox'' is the brand of the company ''Proxmox Server Solutions GmbH''. Besides ''Proxmox Virtual Environment'' (''PVE'')<ref>https://pve.proxmox.com/</ref> there are other products called ''Proxmox Backup Server'' (''PBS'')<ref>https://pbs.proxmox.com/</ref> and ''Proxmox Mail Gateway'' (''PMG'')<ref>https://pmg.proxmox.com/</ref>.


== References ==
<references />




[[Category:Software]]
[[Category:Virtualization]]
