Proxmox Virtual Environment (https://www.proxmox.com/proxmox-ve) - shortened PVE - (wikipedia:en:Proxmox Virtual Environment) is a platform for containerization and virtualization. PVE can manage a so-called "data center" as a cluster of machines and storage. (It supports file systems like ZFS and Ceph.) It is mostly used through a web user interface (WUI). It is open source and is based on Debian GNU/Linux (with a customized Ubuntu kernel).
Proxmox VE uses
- KVM for virtualization and
- LXC for containerization.
NixOS runs on both.
The instructions should work for PVE 7.2 and later with NixOS 22.05 and later.
KVM
It is possible to generate generic qcow2 images and attach them to VMs with qm importdisk, as shown in the Proxmox wiki (https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Importing_to_Proxmox_VE).
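For illustration, a rough sketch of that route (the image file name, storage name, and disk slot are placeholders, not part of the official instructions; the resulting disk name depends on the storage backend):

# build a generic qcow2 image with nixos-generators
nix run github:nix-community/nixos-generators -- --format qcow
# import the resulting disk into an existing VM, then attach it
qm importdisk <vmid> nixos.qcow2 local-lvm
qm set <vmid> --scsi0 local-lvm:vm-<vmid>-disk-0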
A better option is to generate a VMA image that can be imported as a VM on Proxmox VE. With this method, many VM configuration options such as CPU, memory, network interfaces, and serial terminals can be specified in Nix instead of being set manually in the Proxmox UI.
Generating VMA
The first run will take some time, as a patched version of QEMU with support for the VMA format needs to be built:
nix run github:nix-community/nixos-generators -- --format proxmox
Pass additional Nix configuration to the template with --configuration filename.nix. In addition to NixOS module options, Proxmox-specific options present in nixos/modules/virtualisation/proxmox-image.nix (https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/virtualisation/proxmox-image.nix) can be used to set cores, memory, disk and other VM hardware options.
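A minimal sketch of such a file (the file name vm.nix and the values are illustrative; the option names come from proxmox-image.nix):

{ ... }:
{
  proxmox.qemuConf = {
    name = "nixos-vm"; # VM name shown in the Proxmox UI
    cores = 2;         # number of CPU cores
    memory = 4096;     # RAM in megabytes
  };
  services.openssh.enable = true; # ordinary NixOS options work here too
}

It would then be passed along as nix run github:nix-community/nixos-generators -- --format proxmox --configuration ./vm.nix.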
Deploying on Proxmox VE
The generated vma.zst file can be copied to /var/lib/vz/dump/ (or any other configured VM dump storage path). A new VM can be spun up from it either using the GUI or the CLI:
qmrestore /var/lib/vz/dump/vzdump-qemu-nixos-21.11.git.d41882c7b98M.vma.zst <vmid> --unique true
Note: the MAC address of net0 defaults to 00:00:00:00:00:00. This must either be overridden through proxmox.qemuConf.net0, or the unique attribute must be set to true when importing the image on Proxmox.
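As a sketch, overriding the default in the image configuration might look like this (the MAC shown is a made-up locally administered address; adjust the bridge to your setup):

{ ... }:
{
  # override the all-zero default MAC so multiple imports don't collide
  proxmox.qemuConf.net0 = "virtio=02:DE:AD:BE:EF:01,bridge=vmbr0";
}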
By default, the generated image is set up to expose a serial terminal interface for ease of access.
root@proxmox-server:~# qm start <vmid>
root@proxmox-server:~# qm terminal <vmid>
starting serial terminal on interface serial0 (press Ctrl+O to exit)

<<< NixOS Stage 1 >>>

loading module dm_mod...
running udev...
Starting version 249.4
.
.
.
[  OK  ] Reached target Multi-User System.

<<< Welcome to NixOS 21.11.git.d41882c7b98M (x86_64) - ttyS0 >>>

Run 'nixos-help' for the NixOS manual.

nixos login: root (automatic login)

[root@nixos:~]#
Network configuration
Cloud-init can be enabled with:
services.cloud-init.network.enable = true;
This will enable systemd-networkd, allowing cloud-init to set up network interfaces on boot.
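A sketch of a configuration file (hypothetically cloud-init.nix) that bakes this into the generated image via --configuration:

{ ... }:
{
  # let cloud-init configure network interfaces at boot (pulls in systemd-networkd)
  services.cloud-init.network.enable = true;
}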
LXC
Generating LXC template
nix run github:nix-community/nixos-generators -- --format proxmox-lxc
Privileged LXCs
While it is not necessary, proxmoxLXC.privileged can be set to true to enable the DebugFS mount in privileged LXCs. If enabled on unprivileged LXCs, this will fail to mount.
Network configuration
The Proxmox LXC template uses systemd-networkd by default to allow network configuration by Proxmox. proxmoxLXC.manageNetwork can be set to true to disable this.
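A sketch combining the two proxmoxLXC options described above (file name and values are illustrative):

{ ... }:
{
  proxmoxLXC = {
    # enable the DebugFS mount; only mounts successfully in privileged LXCs
    privileged = true;
    # set to true to manage networking from NixOS instead of letting Proxmox configure it
    manageNetwork = false;
  };
}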
Deploying on Proxmox VE
Copy the tarball to Proxmox, then create a new LXC with this template through the web UI or the CLI. The “nesting” feature needs to be enabled. Newer versions of Proxmox will have it enabled by default.
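As a sketch of the CLI route (the template file name, storage, and network settings are placeholders; see the pct man page for the exact flags):

pct create <vmid> local:vztmpl/nixos-system-x86_64-linux.tar.xz \
  --unprivileged 1 \
  --features nesting=1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --ssh-public-keys ~/.ssh/id_ed25519.pub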
As of now, not all of the configuration options in the web UI work for Proxmox LXCs. Network configuration and adding SSH keys to the root user work, while setting a password for the root user and setting the hostname don't.
It is suggested to set a root password within the container on first boot.
The template built above without any options does not come with /etc/nixos/configuration.nix. A minimal working example is presented below. Be sure to run nix-channel --update and reboot the container before running nixos-rebuild switch.
{ pkgs, modulesPath, ... }:
{
  imports = [ (modulesPath + "/virtualisation/proxmox-lxc.nix") ];
  environment.systemPackages = [ pkgs.vim ];
}
LXC Console
You may need to set the Console Mode option to /dev/console (instead of the default of "tty") in order to make the console shell work.
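With the CLI, that corresponds to something like the following (the VM ID is a placeholder; cmode is the pct option behind the Console Mode setting):

pct set <vmid> --cmode console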
LXC See also
- earlier wiki page Proxmox Linux Container
Name
Proxmox Virtual Environment is also called
- Proxmox VE for short,
- PVE as an abbreviation,
- or just Proxmox.
Proxmox is the brand name of the company Proxmox Server Solutions GmbH. Besides Proxmox Virtual Environment (PVE)[1], there are other products called Proxmox Backup Server (PBS)[2] and Proxmox Mail Gateway (PMG)[3].