Articles in the VPS column:
- VPS: the cores trick
- VPS (second part): the SSD trick
- VPS (third part): evaluating performance
- VPS (fourth part): basic troubleshooting in Linux
Here we are at the first stage of our guide to choosing a Virtual Private Server cloud service: let’s uncover the secrets of cores and vCPUs.
If you’re not satisfied with the offerings from Amazon, Google and Microsoft, a couple of Google searches will turn up hundreds, even thousands, of alternative VPS (Virtual Private Server), public cloud and other IaaS (Infrastructure as a Service) services.
Prices are extremely heterogeneous, ranging from under €20 per year to several thousand euros per month. In most cases we are dealing with Virtual Machines, usually Linux-based and sometimes Windows-based. Even if we pin down some basic requirements (RAM, disk, cores and available bandwidth), offerings still vary widely in price, even after excluding extra services such as backup or firewalls. How is that possible? Are there obscure, hidden differences that explain the gap? Or are we just paying for the brand? What kind of support is available? What uptime does the service guarantee?
It’s hard to draw firm conclusions. The parameters listed above certainly contribute to the final price of a VPS, but the real difficulty is picking a service that guarantees an acceptable service level and adequate support. Oversimplifying is a mistake; for instance, it’s wrong to assume that giants like Microsoft, Google or Amazon are necessarily the best options: all of them have suffered more or less serious problems in the past, and their increasingly hard-to-decipher offerings hide pitfalls and details that are not always clear.
Another mistake is to rely, for anything beyond testing, on providers of dubious reputation that sell their service at a loss or adopt overselling policies. The obvious risks are unusually long outages, high latency and performance well below expectations. The worst case is the sudden, unannounced termination of the service.
As with any cloud service, a key point to keep in mind is how long it will take to migrate the service and export your data if you move to a different public solution or to an on-premises alternative.
You clearly don’t want to end up hostage to your own service provider.
In this first article we start with the number of “cores”, the parameter most often used to express the computing power of a VPS. In the next articles we will explain how to evaluate offerings based on the available RAM and I/O resources (disks, SSDs and storage), and we will analyze other parameters that deserve careful evaluation, such as licences, the type of support and other optional services.
Cores, those unknowns
The number of cores is certainly one of the parameters most often used in selling a VPS. Unfortunately it is a very generic term that can mean everything or nothing at all. On the hardware side, the computing power of a single core varies with the processor: it can differ significantly even within the same Xeon generation, and the gaps become huge when you consider all the Intel and AMD CPUs found in production servers.
Reality is even worse than expected. When we talk about the cores of a VPS, we are not even talking about physical cores. First we have to consider Intel’s Hyper-Threading technology (Intel being, in practice, the dominant manufacturer of server CPUs): a 6-core Xeon with Hyper-Threading presents 12 logical cores to the operating system. In vSphere, for instance, to assign a VM all the cores of a single 6-core CPU you must select 12 vCPUs, not 6, in the VM configuration settings. A vCPU (virtual CPU), or a “core” in the virtualization world, therefore corresponds to half a physical core, not a whole one.
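As a practical check, the gap between logical and physical cores is visible from inside a Linux guest: /proc/cpuinfo lists one block per logical CPU, and the (physical id, core id) pairs identify the underlying physical cores. A minimal Python sketch (the helper function and the sample data are ours, for illustration only):

```python
# Sketch: distinguish logical CPUs (what a hypervisor sells as "vCPUs")
# from physical cores by parsing /proc/cpuinfo-style text.

def count_cpus(cpuinfo_text):
    """Return (logical_cpus, physical_cores) from /proc/cpuinfo content."""
    logical = 0
    cores = set()
    phys_id = core_id = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("processor"):
            logical += 1
        elif line.startswith("physical id"):
            phys_id = line.split(":")[1].strip()
        elif line.startswith("core id"):
            core_id = line.split(":")[1].strip()
            cores.add((phys_id, core_id))
    # If the hypervisor hides topology fields, fall back to logical count
    return logical, len(cores) or logical

# Invented sample: a 6-core Xeon with Hyper-Threading shows 12 logical CPUs
sample = "\n".join(
    f"processor\t: {i}\nphysical id\t: 0\ncore id\t: {i % 6}"
    for i in range(12)
)
print(count_cpus(sample))  # (12, 6)
```

On a real system you would feed it `open("/proc/cpuinfo").read()`; many hypervisors, however, expose a flat topology, so what you see is what the provider chose to show.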
If you then read the fine print of the contract you signed with your provider, you might discover that you do not have exclusive access even to that half core, or vCPU. Cores can in fact be shared by assigning them to several VPSs; from the user’s side this is hard to detect at the performance level as long as the VPSs you share resources with are essentially idle or consume only a few MHz. Even some mid/high-tier providers do not enforce a 1:1 vCPU-to-core policy, but assign at least 2 VPSs to each vCPU. Another widespread approach is to bundle RAM and cores into packages, so those who need lots of RAM get lots of cores and, conversely, those who need lots of cores must buy lots of RAM.
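One user-side signal that a vCPU is contended is “steal” time: the share of time the hypervisor ran other guests while our vCPU wanted to run. On Linux it is the eighth field of the `cpu` line in /proc/stat (field order per the proc(5) man page: user, nice, system, idle, iowait, irq, softirq, steal). A small sketch, with an invented sample line:

```python
# Sketch: compute the steal-time percentage from a /proc/stat "cpu" line.
# Consistently high steal suggests the vCPU is shared with busy neighbours.

def steal_percent(stat_cpu_line):
    """Percentage of total CPU time stolen by the hypervisor."""
    fields = [int(x) for x in stat_cpu_line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0
    total = sum(fields)
    return 100.0 * steal / total if total else 0.0

# Hypothetical sample: 800 jiffies of steal out of 10,000 total
print(steal_percent("cpu 5000 100 1500 2000 500 50 50 800"))  # 8.0
```

In practice you would sample `open("/proc/stat").readline()` twice a few seconds apart and compute the delta, since the counters are cumulative since boot.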
An original approach to measuring computing power was Amazon’s ECU. ECU stands for EC2 Compute Unit, a value defined as the equivalent of a 1.7 GHz Xeon from 2006. Fortunately this unit was abandoned in the summer of 2014, and now Amazon too uses the term vCPU in the description of its VPSs, or instances as the American giant calls them. For each machine type Amazon specifies which processors are used, giving the vCPU a more precise definition.
Burstable performance, slowed-down VPSs?
That doesn’t hold for all instances: the t2 family, the most popular thanks to its appealing price, adopts a sophisticated CPU assignment method that AWS calls “burstable performance”. In essence, these VMs are not assigned a whole vCPU but a percentage of one: 10% for t2.micro, 20% for t2.small, 40% for t2.medium and 60% for t2.large. The figures refer to a single core; a t2.medium, which has 2 cores, gets 20% of each core, or 40% of a single core for single-threaded applications.
The percentage above is not fixed but tied to a pool of credits assigned to the VM every hour. While the CPU is idle the VM accumulates credits, which it can later spend to exceed the limits above. In essence the mechanism only works for loads that are limited in time, such as a lightly used Web server. Under sustained load the VM stays pinned to the percentages above; when it has spare credits it can reach 100% of the assigned cores.
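The credit mechanism can be modelled with a toy simulation. The 10% baseline for t2.micro comes from the article; the figures of 6 credits earned per hour, a 144-credit cap, and one credit being worth one minute of a full vCPU are AWS’s published t2.micro numbers at the time, used here as assumptions:

```python
# Toy model of the t2 "burstable" credit bucket. A t2.micro earns
# 6 credits/hour (6/60 = 10% baseline); running at 100% costs 60
# credits/hour; the bucket is capped at 144 credits (24h of accrual).

def simulate(hours_idle, hours_full_load, earn_per_hour=6.0, cap=144.0):
    """Return CPU credits left after idling, then running flat out."""
    credits = min(hours_idle * earn_per_hour, cap)   # accrue while idle
    for _ in range(int(hours_full_load)):
        credits += earn_per_hour     # baseline accrual continues
        credits -= 60.0              # full-load spend: 60 credits/hour
        if credits <= 0:             # bucket empty: throttled to baseline
            return 0.0
    return credits

print(simulate(hours_idle=10, hours_full_load=1))   # 6.0
print(simulate(hours_idle=0, hours_full_load=2))    # 0.0 (throttled)
```

The numbers make the trade-off visible: ten idle hours buy roughly one hour of full-speed bursting, which is why the scheme suits spiky loads and penalizes sustained ones.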
CPU without Turbo
Even declaring which CPU is used is not enough to pin down performance. According to some tests available on the Net about Azure’s A9 machines (which cost €3,000/month for the Windows version and €2,800/month for Linux), for example, Microsoft uses a Xeon E5-2670 Sandy Bridge-EP but disables Turbo Boost to save on energy costs, losing almost 40% of single-thread performance. That’s a big difference even for one specific processor model! An interesting document detailing the processors and frequencies used by Azure VMs can be found on TechNet at http://blogs.technet.com/b/stephw/archive/2015/06/01/details-of-the-azure-processors-for-vm-sizes.aspx.
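From inside a VM you cannot read the Turbo Boost setting directly, but you can at least check the CPU model string the hypervisor exposes and compare the sustained clock under load against the rated base clock; sustained MHz above the base clock is a strong hint that Turbo is active. A small sketch (the parsing helper and the sample text are ours):

```python
# Sketch: extract the CPU model string from /proc/cpuinfo-style text.
# Comparing the rated frequency in this string with the "cpu MHz" field
# observed under load hints at whether Turbo Boost is enabled.
import re

def cpu_model(cpuinfo_text):
    """Return the 'model name' value, or None if the field is hidden."""
    m = re.search(r"model name\s*:\s*(.+)", cpuinfo_text)
    return m.group(1).strip() if m else None

sample = "processor : 0\nmodel name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz\n"
print(cpu_model(sample))  # Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
```

Note that some hypervisors mask or rewrite this field entirely, in which case only a benchmark will tell you what you actually bought.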
Pay attention to the specifications
These are just a few examples drawn from the best-known cases; we obviously invite you to evaluate your cloud providers’ offerings case by case, trying to obtain details on how the physical CPUs are actually used on the machines that will host your applications. This is the only way to avoid comparing apples and oranges and to make a rational purchase based on your needs, perhaps saving money. One evaluation that must not be forgotten is the comparison with on-premises or housing solutions. The price of a VPS includes electricity, UPS systems and all the setup, maintenance and management costs. But when your calculations add up to the figures cited above (a few thousand euros per month), you might find that putting your machines in a datacenter, or even inside your own walls, is the more convenient option, even though it is more demanding and offers fewer guarantees.
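To make the on-premises comparison concrete, here is a back-of-envelope break-even calculation. Apart from the €2,800/month Linux A9 price taken from the article, all figures are assumptions chosen purely for illustration:

```python
# Hypothetical break-even: how many months of VPS fees would pay for
# a comparable physical server? All costs below except vps_monthly
# (the article's A9 Linux figure) are invented for illustration.
vps_monthly = 2800.0     # A9-class instance, Linux (article figure)
server_capex = 15000.0   # comparable physical server, assumed
colo_monthly = 400.0     # rack space + power + bandwidth, assumed
admin_monthly = 600.0    # extra management effort, assumed

monthly_saving = vps_monthly - colo_monthly - admin_monthly
months_to_break_even = server_capex / monthly_saving
print(round(months_to_break_even, 1))  # 8.3
```

With these (hypothetical) numbers the hardware pays for itself in well under a year; with a €50/month VPS the same arithmetic would never favour buying iron, which is exactly why the comparison only matters at the high end.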
In spite of the “cloud” trend.
Next article: VPS (Second part): The SSD trick