When someone talks about server virtualization, one immediately thinks of VMware vSphere and, perhaps, of Microsoft Hyper-V, Citrix XenServer or Red Hat Virtualization (KVM).
Proxmox Virtual Environment (VE) is a lesser-known and less celebrated alternative, but a valid and original one nonetheless.


UPDATE: on December 12th 2015, after our test, version 4.1 was released, based on the latest Debian Jessie with kernel 4.2.6, LXC and QEMU 2.4.1. Several bugs have been fixed, and ZFS integration and a number of LXC container functions have been improved.

Proxmox is an open source project based on KVM and, starting with the new 4.0 version, on LXC (Linux Containers): it’s free, but the company that develops it offers paid commercial support.
It’s a Debian-based hypervisor that uses a modified version of the Red Hat Enterprise Linux (RHEL) kernel, and it’s available as an ISO image to be installed bare-metal on a physical host.
The management interface is Web-based only and doesn’t require a dedicated management server or VM. As with the vast majority of Linux-based products, the command line is sometimes needed to perform advanced operations. The available documentation is scarce, although the dedicated Wiki offers some detailed guides covering the main operations; its level is still quite far from that of projects like vSphere or Hyper-V.

Proxmox offers two different types of virtual machines (VM): full virtualization with KVM (Kernel-based Virtual Machine) and container-based virtualization.
KVM VMs are classic autonomous VMs with a dedicated operating system and resources, whereas Containers implement virtualization at the operating system level, offering several isolated Linux systems on a single host.
Starting with release 4.0, Containers are managed with LXC instead of OpenVZ, which has been abandoned.

[Figure: cluster view in the Web interface]

As the best IaaS platforms do, Proxmox allows the creation of multi-node clusters to implement High Availability (HA): this feature is managed through the Web interface, but it has to be set up from the command line. The multi-master architecture makes it possible to avoid nodes dedicated exclusively to centralized management, thus saving resources and avoiding a Single Point of Failure (SPOF). Indeed, when accessing one of the hosts with the Web client, it’s possible to see and manage all the other hosts in that same cluster. With such a configuration we can also enable HA on a per-VM and per-Container basis, provided there’s a dedicated shared storage. Similarly to vSphere, Proxmox supports local disks (with LVM or ZFS), network storage through NFS and iSCSI, and network file systems like GlusterFS. When configuring a cluster, it’s worth noting that the Ceph file system can be used as shared storage; Ceph however requires a dedicated management server (although in small environments it can be installed on a node of the cluster).
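To give an idea of how storage is defined, every storage added to a cluster ends up as an entry in the shared configuration file /etc/pve/storage.cfg; the sketch below is purely illustrative (storage names, paths and the server address are made up):

  dir: local
          path /var/lib/vz
          content iso,vztmpl,backup

  nfs: nfs-share
          server 192.168.0.50
          export /srv/proxmox
          content images,backup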

The platform supports multiple snapshots and includes a native backup function with scheduling capabilities for each single VM. Live migration is not missing either: a VM can be migrated without downtime, even between different storages (the equivalent, to be clear, of vMotion and Storage vMotion in VMware).
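For reference, the same operations are also exposed by the qm and vzdump command line tools; the following is a minimal sketch in which the VM ID 100, the snapshot name and the nfs-share storage are only examples:

  qm snapshot 100 before-upgrade                                  # take a snapshot of VM 100
  vzdump 100 --storage nfs-share --mode snapshot --compress lzo   # back up VM 100 to shared storage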

Networking is structured on the bridged model, like vSphere: a bridge is the software equivalent of a physical switch to which different virtual machines can be connected, sharing the communication channel. Each host supports up to 4096 bridges, and access to the external world is guaranteed by associating bridges with physical network ports on the host. Network management is performed through the dedicated entry in the menu of the Web interface.
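On a standard installation the default bridge is vmbr0, defined in /etc/network/interfaces like any other Debian network interface; the addresses in the following sketch are just an example:

  auto vmbr0
  iface vmbr0 inet static
          address 192.168.0.10
          netmask 255.255.255.0
          gateway 192.168.0.1
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0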

Installation and usage

The installation procedure is quite simple; during the process we can choose between the default settings and custom settings, such as configuring local storage with the ZFS file system (useful when we prefer software-based disk management instead of hardware RAID with a physical controller). Adding network storage (in our case, NFS) takes just a few steps, making it visible to the single host or to the whole cluster.

[Figure: adding NFS storage through the Web interface]
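The same result can also be obtained from the shell with the pvesm tool; the server address, export path and storage name below are purely indicative:

  pvesm add nfs nfs-share --server 192.168.0.50 --export /srv/proxmox --content images,iso,backup
  pvesm status    # check that the new storage is active on the node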

Once the Proxmox setup is done on each node, we created a cluster, a requirement for HA. This feature is configured from the command line on each node: once the cluster has been created on the first node, all you need to do is ‘add’ each of the other hosts and watch them appear in the left column of the Web interface. Contrary to what one might expect, just two simple commands cover these two operations: pvecm create [cluster-name] (to create the cluster on any node) and pvecm add [cluster-IP-address] (to add nodes to the cluster; the following image shows the procedure).

[Figure: creating the cluster from the command line]
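In our lab the whole procedure boiled down to something like the following (the cluster name and IP address are, of course, specific to each environment):

  # on the first node
  pvecm create labcluster
  # on every other node, passing the address of a node already in the cluster
  pvecm add 192.168.0.10
  # check membership and quorum
  pvecm status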

As previously hinted, the multi-master nature of Proxmox allows for completely decentralized management; indeed, we can perform actions on the various hosts in the cluster in the same way regardless of which one we are connected to.

For those who are familiar with other IaaS platforms like vSphere, the creation of virtual machines is structured in a similar way: we can assign resources (disk, RAM, network cards, etc.) and link the VM to an ISO of the operating system to install. Direct connection to the VMs is possible thanks to the included HTML5 console ‘noVNC’ (or SPICE as an alternative), integrated in the web interface and opening in its own window.
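VM creation can also be scripted with the qm tool; the sketch below creates and starts a KVM virtual machine, where the VM ID, names, disk size, storage and ISO file are only examples:

  qm create 100 --name web01 --memory 2048 --ostype l26 \
     --net0 virtio,bridge=vmbr0 \
     --virtio0 local:32 \
     --ide2 local:iso/debian-8.2.0-amd64-netinst.iso,media=cdrom
  qm start 100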

[Figure: creating a Container]

As far as Containers are concerned, their creation differs fundamentally in how the desired operating system is assigned. Because of the nature of Containers, only the parts related to network configuration, DNS and hardware resources are like in classic VMs; instead of a bootable operating system, we need to provide Proxmox with a dedicated Template. Templates are pre-configured operating system images that can be downloaded straight from the main management interface, ready to be used in a Container. Most of the documentation on this topic still refers to OpenVZ templates (for which a dedicated repository was available); with the introduction of LXC, the download is done directly from the host storage management page with the ‘Templates’ button, which shows the ones currently available for LXC (Ubuntu 14.04, Arch, CentOS 6, Debian 7.0, etc.).
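From the command line, templates are handled by the pveam tool and Containers by pct; the template file name and the Container parameters below are indicative and depend on what the repository offers at download time:

  pveam update                                   # refresh the list of available templates
  pveam available                                # list the templates that can be downloaded
  pveam download local debian-8.0-standard_8.0-1_amd64.tar.gz
  pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz \
      --hostname ct-test --memory 512 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp
  pct start 200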


[Figure: HA configuration]

When using HA it’s necessary to tell Proxmox which virtual machines are included; a dedicated menu entry is available in the Web management console and, of course, there’s also a specific command for the command line. Proxmox also offers an HA simulator, a feature that allows testing high availability inside a simulated environment.
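From the shell, a VM is registered as an HA resource with the ha-manager tool; the VM ID is, again, just an example:

  ha-manager add vm:100    # put VM 100 under HA control
  ha-manager status        # check the state of HA resources across the cluster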

Our test

Our testing lab consisted of a cluster of three HP servers (of different generations, ProLiant DL360 and DL380), connected to the same subnet and equipped with local storage onto which Proxmox was installed (installation on SD cards or USB pen drives is supported for test labs only, not for production environments).

Two of the nodes were configured with the standard procedure, without any advanced configuration; on the third server we used local storage with the ZFS file system, without any installation problem. Apart from the different disk management (hardware RAID via controller versus software RAID with ZFS), the use of the storage space available on the node was completely transparent. Working with virtual machines is comparable to the better-known solutions: the noVNC console allows interacting in a relatively comfortable way even with desktop systems. Access to the VMs via SSH or Web interface is completely transparent with respect to the hypervisor. We also tried cloning and template creation without any problem. The subsequent migration of virtual machines and disks from one host to another raised no problems at all, and with the Web interface the procedure is pretty easy, as it only asks for the destination host (in case of disk migration we can also specify whether to keep or remove the original disk).
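For the record, the same migrations can also be started from the shell; the VM ID, node and storage names below are purely indicative:

  qm migrate 100 node2 --online        # live migration of VM 100 to node2
  qm move_disk 100 virtio0 nfs-share   # move the VM disk to another storage; add --delete to remove the original disk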

[Figure: VM migration]

Container management is slightly more involved: the shift from OpenVZ to LXC has made it necessary to convert previously created Containers (a dedicated guide is available on the Wiki); furthermore, offline migration was only introduced with release 4.0 beta2. Although Proxmox best practices call for a dedicated network port (and related virtual networking) for VM and Container migration, we carried out our tests using a single physical port on the host, onto which the virtual configuration was mapped.
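Container migration uses the pct tool instead (offline only at the time of our test); the Container ID and node name are examples:

  pct migrate 200 node3    # the Container must be stopped before migrating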

We’d also like to report a bug we found when testing HA, still on Enterprise-level HP servers (onto which we have run several ESXi versions, up to the latest 6.0, without any compatibility problem).
A Linux bug in the communication between the system and the iLO (the management controller of HP servers), which is needed to check the current state of the physical host and identify potential migration situations, crashes the cluster nodes on which the HA feature is activated for a VM (specifically, a kernel panic caused by the hpwdt module). Fixing this issue requires disabling the watchdog service.
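One common way to keep the offending hpwdt module from loading at boot is to blacklist it; this is a generic Linux technique rather than an official Proxmox procedure, and the file name below is arbitrary:

  echo "blacklist hpwdt" > /etc/modprobe.d/blacklist-hpwdt.conf
  update-initramfs -u    # rebuild the initramfs so the blacklist is applied at boot
  reboot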

Why choose Proxmox

Proxmox is a free and quite complete solution for the management of virtual infrastructures in small and medium-sized environments, but before jumping into this project it’s good to weigh certain side effects. The main deficiency is the lack of the vast and complete ecosystem of software, services, documentation and support that surrounds platforms like Microsoft’s and VMware’s, as well as of the reliability level required in production environments, especially larger ones.
The bug we found while configuring High Availability is an example that made us reflect on the possible consequences of adopting this solution in critical systems.
Considering the recent, and quite pronounced, trend of the IT market towards Containers, Proxmox is a great starting point to get acquainted with the topic. Indeed, starting to work with this method of virtualization is really simple and intuitive, even without an in-depth study or a dedicated infrastructure.
Proxmox is a project rich in potential and certainly with a wide margin for improvement; however, we’d advise you to carefully consider all the possible implications before adopting it in long-term projects, or in projects that require a greater level of durability and reliability.

About the Authors

Lorenzo Bedin and Riccardo Gallazzi

Lorenzo Bedin

A graduate in Telecommunications Engineering, he works as a freelance IT consultant, after a period of training and in-company experience as a Windows and Linux systems administrator. He deals with hardware solutions, websites and virtualization.

Riccardo Gallazzi

A junior systems administrator, his main fields of interest include virtualization with vSphere and Proxmox and the management of Linux environments. He is VMware VCA for Data Center Virtualization certified.
