Open Source Server Virtualization with KVM and LXC
Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux distribution. The source code of Proxmox VE is released under the GNU Affero General Public License, version 3 (GNU AGPL, v3). This means that you are free to inspect the source code at any time or contribute to the project yourself.
Using open source software guarantees full access to all functionality - as well as high security and reliability. Everybody is encouraged to contribute, while Proxmox ensures the product always meets professional quality criteria.
Kernel-based Virtual Machine (KVM)
The open source hypervisor KVM is a full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It is a kernel module included in mainline Linux.
With KVM you can run multiple virtual machines from unmodified Linux or Windows images. It enables users to be agile by providing robust flexibility and scalability that fit their specific demands. Proxmox Virtual Environment has used KVM virtualization since the beginning of the project in 2008, starting with version 0.9beta2.
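As a sketch, creating and starting a KVM guest from the Proxmox VE command line might look like this. The VM ID, disk size, and the storage name "local-lvm" are assumptions, not defaults, and the commands are guarded so the script is a no-op on a machine without Proxmox VE installed:

```shell
# Hypothetical example: define and boot a minimal KVM guest with the qm tool.
VMID=100
if command -v qm >/dev/null 2>&1; then
  qm create "$VMID" \
    --name demo-vm \
    --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 local-lvm:32        # 32 GB disk on the assumed "local-lvm" storage
  qm start "$VMID"
fi
```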
Containers are a lightweight alternative to full machine virtualization offering lower overhead.
Linux Containers (LXC)
LXC is an operating-system-level virtualization environment for running multiple isolated Linux systems on a single Linux control host. LXC works as a userspace interface for the Linux kernel's containment features. Linux users can easily create and manage system or application containers with a powerful API and simple tools.
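A sketch of the same workflow for containers, using pct, the Proxmox VE container CLI. The container ID, template file name, and bridge are placeholders, and the block is guarded for hosts without Proxmox VE:

```shell
# Hypothetical example: create and start a Debian system container.
CTID=200
TEMPLATE=local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst  # placeholder template
if command -v pct >/dev/null 2>&1; then
  pct create "$CTID" "$TEMPLATE" \
    --hostname demo-ct \
    --memory 512 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start "$CTID"
fi
```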
Move your running virtual machines from one physical host to another without any downtime.
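A minimal sketch of such a live migration, assuming shared storage and a second cluster node named "node2" (both placeholders); the command is guarded for non-Proxmox hosts:

```shell
# Hypothetical example: move running guest 100 to another node without downtime.
TARGET_NODE=node2
if command -v qm >/dev/null 2>&1; then
  qm migrate 100 "$TARGET_NODE" --online   # --online keeps the guest running
fi
```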
Open Virtualization Alliance
Proxmox Server Solutions GmbH is a participating member of the Open Virtualization Alliance, an industry consortium fostering the adoption of KVM as an enterprise-ready open virtualization solution. OVA is a Linux Foundation Collaborative Project.
The Linux Foundation is a non-profit consortium dedicated to fostering the growth of Linux. As a member of the Linux Foundation, Proxmox supports the advancement of Linux and KVM.
Unique Multi-master Design
The integrated web-based management interface gives you a clean overview of all your KVM guests and Linux containers, and even of your whole cluster. You can easily manage your VMs and containers, storage, or cluster from the GUI. There is no need to install a separate, complex, and pricey management server.
Proxmox Cluster File System (pmxcfs)
Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data in a persistent database on disk; nonetheless, a copy of the data resides in RAM, which limits the maximum storage size to 30 MB - more than enough for thousands of VMs.
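In practice, pmxcfs is mounted at /etc/pve on every node, so guest configurations can be browsed like ordinary files; the listing below is a sketch and only does anything on a Proxmox VE host:

```shell
# pmxcfs is mounted at /etc/pve; guest configs stored there are replicated
# cluster-wide by corosync.
PMXCFS_MOUNT=/etc/pve
if [ -d "$PMXCFS_MOUNT" ]; then
  ls "$PMXCFS_MOUNT/qemu-server"   # one <vmid>.conf per KVM guest
  ls "$PMXCFS_MOUNT/lxc"           # one <vmid>.conf per container
fi
```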
Proxmox VE is the only virtualization platform using this unique cluster file system.
Command Line Interface (CLI)
For advanced users who are used to the comfort of the Unix shell or Windows Powershell, Proxmox VE provides a command line interface to manage all the components of your virtual environment. This command line interface has intelligent tab completion and full documentation in the form of UNIX man pages.
REST web API
Proxmox VE uses a RESTful API. We chose JSON as the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration for third-party management tools, such as custom hosting environments.
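The same API is also exposed locally through the pvesh CLI wrapper; as a sketch, the call below mirrors the HTTP endpoint GET /api2/json/nodes (the path is from the public API, the rest is guarded for non-Proxmox hosts):

```shell
# Hypothetical example: query the REST API locally via pvesh.
API_PATH=/nodes
if command -v pvesh >/dev/null 2>&1; then
  pvesh get "$API_PATH" --output-format json   # JSON, the API's primary format
fi
```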
You can define granular access for all objects (like VMs, storage, nodes, etc.) by using the role-based user and permission management. This allows you to define privileges and helps you control access to objects. This concept is also known as access control lists: each permission specifies a subject (a user or group) and a role (a set of privileges) on a specific path.
Proxmox VE supports multiple authentication sources like Microsoft Active Directory, LDAP, Linux PAM standard authentication or the built-in Proxmox VE authentication server.
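A sketch of this model with pveum, the Proxmox VE user-management CLI: create a user in the built-in PVE realm and grant a role on a path. The user name, VM ID, and role are illustrative; on older releases the ACL subcommand is spelled "pveum aclmod". Guarded as before:

```shell
# Hypothetical example: role-based access on a specific path.
ACL_PATH=/vms/100
if command -v pveum >/dev/null 2>&1; then
  pveum user add alice@pve --password 's3cret'          # built-in PVE realm
  pveum acl modify "$ACL_PATH" --users alice@pve --roles PVEVMUser
fi
```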
Backup and Restore
The integrated backup tool (vzdump) creates consistent snapshots of running containers and KVM guests. It basically creates an archive of the VM or CT data and also includes the VM/CT configuration files.
KVM live backup works for all storage types, including VM images on NFS, iSCSI LUN, Ceph RBD, or Sheepdog. The new backup format is optimized for storing VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
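As an illustration, a snapshot-mode backup of guest 100 to a storage named "backup" (the guest ID, storage ID, and compression choice are assumptions); vzdump handles both VMs and containers, and the block is a no-op off a Proxmox host:

```shell
# Hypothetical example: consistent live backup with vzdump.
BACKUP_MODE=snapshot
if command -v vzdump >/dev/null 2>&1; then
  vzdump 100 --mode "$BACKUP_MODE" --storage backup --compress zstd
fi
```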
Proxmox VE High Availability Cluster
A multi-node Proxmox VE HA Cluster enables the definition of highly available virtual servers. The Proxmox VE HA Cluster is based on proven Linux HA technologies, providing stable and reliable HA service.
Proxmox VE HA Manager
During deployment, the resource manager, called Proxmox VE HA Manager, monitors all virtual machines and containers in the whole cluster and automatically takes action if one of them fails. The Proxmox VE HA Manager requires zero configuration; it works out of the box. Additionally, watchdog-based fencing simplifies deployments dramatically.
For easy handling, the whole Proxmox VE HA Cluster can be configured via the integrated web-based GUI.
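The same settings are reachable from the shell via ha-manager; as a sketch, the guest ID below is a placeholder and the resource ID format is vm:&lt;vmid&gt; (or ct:&lt;vmid&gt; for containers). Guarded for non-Proxmox hosts:

```shell
# Hypothetical example: place guest 100 under HA control.
HA_RESOURCE=vm:100
if command -v ha-manager >/dev/null 2>&1; then
  ha-manager add "$HA_RESOURCE" --state started
  ha-manager status                               # show cluster HA state
fi
```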
Proxmox VE Simulator
The integrated Proxmox VE HA Simulator enables you to learn all HA functionality and test your setup prior to going into production.
Proxmox VE uses a bridged networking model. All VMs can share one bridge, as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges are attached to physical network cards that are assigned a TCP/IP configuration.
For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the Linux network stack.
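As a sketch of such a setup, a typical /etc/network/interfaces fragment on a Proxmox VE host might look like the following; interface names, addresses, and VLAN tag are placeholders, not values from this document:

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# Optional: a bond (link aggregation) carrying a tagged VLAN into a second bridge
auto bond0
iface bond0 inet manual
    bond-slaves eno2 eno3
    bond-mode active-backup

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0.20    # IEEE 802.1q VLAN tag 20
    bridge-stp off
    bridge-fd 0
```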
The Proxmox VE storage model is very flexible. Virtual machine images can be stored either on one or several local storages, or on shared storage like NFS or SAN. There are no limits; you may configure as many storage definitions as you like. You can use all storage technologies available for Debian Linux.
The benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime.
Via the web interface you can add the following storage types:
Network storage types supported
- LVM Group (network backing with iSCSI targets)
- iSCSI target
- NFS Share
- Ceph RBD
- Direct to iSCSI LUN

Local storage types supported
- LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
- Directory (storage on existing filesystem)
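Storages can also be registered from the CLI with pvesm; the sketch below adds an NFS share and a directory storage with placeholder IDs, server address, and paths, and is guarded for non-Proxmox hosts:

```shell
# Hypothetical example: define storages from the command line.
NFS_ID=nfs-images
if command -v pvesm >/dev/null 2>&1; then
  pvesm add nfs "$NFS_ID" --server 192.0.2.20 --export /srv/images \
    --content images,iso
  pvesm add dir local-dir --path /mnt/data --content backup
  pvesm status                       # list all configured storages
fi
```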