
IT departments have to walk a fine line between deploying too many servers and deploying too few. While it’s always necessary to keep enough spare capacity to handle load spikes, too much idle capacity ties up financial resources that would be better used elsewhere. Server virtualization was supposed to solve this problem, but a recent GigaOm article on CPU and server utilization revealed that even virtualized servers fail to make efficient use of their resources.

The reasons for this are fairly easy to understand. First, server utilization is limited by I/O: modern processors are so powerful that they vastly outstrip both network bandwidth and disk throughput, leaving the CPU idle much of the time while it waits for data. Second, efficient utilization is hard to achieve in itself; interleaving workloads so that servers are kept maximally busy is a very difficult scheduling problem.

What this means in practice is that businesses are paying for servers that they find impossible to use efficiently. Wasted server resources equate to wasted money.

While cloud technology, which is at heart virtualization technology, has yet to solve this problem at the data center level, it does provide an answer for end users. Much of the advantage of the cloud lies in shifting the burden of managing infrastructure from end users to cloud providers. Deploying cloud servers is usually as simple as interacting with a web interface or an API, while hardware deployment and the details of server orchestration are left to the provider. The utilization problem can be shifted in the same way: because cloud interfaces abstract away the underlying infrastructure, they allow businesses to focus on deployment at a higher level.
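To make that abstraction concrete, here is a minimal sketch of what provisioning can look like from the user's side, assuming a hypothetical provider REST API (the endpoint, field names, and token below are illustrative placeholders, not any specific vendor's interface):

```python
import requests

API_URL = "https://api.example-cloud.com/v1/servers"  # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"                          # placeholder credential

def provision_server(name, cpu_cores, ram_gb, region):
    """Request a new virtual server; the provider handles all hardware details."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"name": name, "cpu": cpu_cores, "ram_gb": ram_gb, "region": region},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. a server ID and IP address, per the provider's schema

# The caller never touches racks, hypervisors, or physical capacity planning:
server = provision_server("web-01", cpu_cores=2, ram_gb=4, region="us-east")
```

The point of the sketch is that the request describes only what the business needs (cores, memory, region); which physical machine ends up hosting it, and how efficiently, is the provider's problem.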

Users of cloud platforms can spin servers up or down as they need them, without worrying about whether the bare metal underneath their virtualized infrastructure is being used efficiently. The utilization problem does not vanish for cloud users; it is just as possible to spin up cloud servers and leave them underutilized. But the management burden is significantly lighter, because the problem is no longer one of deploying physical servers that, once procured, must either be worked hard or left idle.
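Continuing the same hypothetical API, tearing a server down when it is no longer needed is equally lightweight, which is what makes it practical to match capacity to demand rather than to hardware already sitting in a rack:

```python
import requests

API_URL = "https://api.example-cloud.com/v1/servers"  # same hypothetical endpoint as above
API_TOKEN = "YOUR_API_TOKEN"                          # placeholder credential

def decommission_server(server_id):
    """Release a virtual server the moment it is no longer needed."""
    response = requests.delete(
        f"{API_URL}/{server_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
```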

The ease with which cloud servers can be deployed and deleted means that a properly managed cloud infrastructure is inherently more scalable. The question is no longer how to utilize fixed server resources efficiently enough to extract maximum ROI from a capital investment, but how to deploy only the virtual servers needed to satisfy current load requirements plus a reasonable level of redundancy.
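As a rough sketch of that sizing decision, the fragment below computes how many virtual servers to run from current load plus a redundancy margin; the request rates, per-server throughput, and headroom factor are assumptions for illustration, not benchmarks:

```python
import math

def servers_needed(current_rps, rps_per_server, redundancy_factor=1.25):
    """Size the fleet to current load plus a safety margin, not to peak hardware capacity.

    current_rps       -- observed requests per second across the service
    rps_per_server    -- throughput one virtual server handles comfortably (assumed, from load testing)
    redundancy_factor -- headroom multiplier for spikes and instance failure
    """
    base = current_rps / rps_per_server
    return max(1, math.ceil(base * redundancy_factor))

# Example: 900 rps of traffic, ~200 rps per server, 25% headroom -> 6 servers
print(servers_needed(900, 200))  # -> 6
```

Run periodically against live metrics, a calculation like this is what drives servers being spun up or down, rather than a fixed fleet sized for a peak that rarely arrives.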

Cloud servers allow businesses to make intelligent resource allocation decisions and implement them with agility.