A Guide to Server Virtualisation

Virtualisation could breathe new life into your IT architecture. This blog post explains what it is and what it can do for you.

Virtualisation isn’t new; it first appeared on mainframes in the 1960s. It gained mainstream traction, though, when VMware, the company that popularised server virtualisation, was founded in 1998. The technology solves what was a perennial problem for servers in the past: under-utilisation.

Smart administrators run only one enterprise application on a server at a time. This avoids several dangers. It’s an effective security measure, because you don’t want a compromised application reading data from another application on the same machine. It also improves reliability: if one application crashes and destabilises the server, you don’t want it taking other applications with it.

The problem with running just one application on a server is that it rarely uses that server to its full capacity, unless perhaps you’re running batch jobs. One 2011 study found servers running at around 36% CPU utilisation. That’s an awful lot of wasted CPU cycles, electricity, and capital investment.

Virtualisation solves that by recreating a server as a software file called a virtual machine (VM). The VM contains the operating system along with any applications you’re running. Because a VM is just a software file, you can run lots of them on a physical server at once while ensuring that none of them interferes with the others. Each VM takes a little of the server’s CPU and memory when it needs it. With several of them working at once, the physical server’s CPU utilisation rises and far fewer cycles are wasted sitting idle.

The VMs can’t just run directly on the server, though. They need a program called a hypervisor that relays communications between them and the physical server hardware. When a VM wants some memory or CPU time, it asks the hypervisor to access the physical server’s hardware on the VM’s behalf.
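To make that a little more concrete, here’s a minimal sketch of how an administrator might ask a hypervisor what it’s running and what resources each VM has been allocated. It uses the Python bindings for libvirt against a local KVM host (KVM comes up again later in this post); it assumes libvirt and its Python bindings are installed, and qemu:///system is the standard local connection URI.

# Minimal sketch: ask the hypervisor which VMs it is running and what CPU and
# memory each one has been allocated. Assumes a KVM host with the libvirt
# Python bindings (libvirt-python) installed.
import libvirt

# Open a read-only connection to the local hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

for dom in conn.listAllDomains():
    # info() returns: state, max memory (KiB), current memory (KiB),
    # vCPU count, and cumulative CPU time (ns).
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name()}: {vcpus} vCPUs, {mem_kib // 1024} MiB RAM, running={running}")

conn.close()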

The hypervisor also keeps the VMs’ use of physical hardware resources separate. This means that if a VM crashes, it won’t take the other VMs down with it. And if a VM gets infected by malware, that malware can’t creep into the memory the other VMs are using (unless it spreads over the network, in which case you’d need to segment your virtual network to protect the VMs from each other).

The main benefits of server virtualisation are efficiency and cost. Instead of buying ten physical servers to run ten applications, you might only need to buy one, running ten VMs. There are other benefits besides capital cost reduction, though.

One of those benefits is easier management. Replacing hardware with software lets you manage servers through software-based policies. You can take snapshots of entire VMs and back them up, applications and all, preserving them in their last-used state. You can provision entire new VMs from templates in seconds, which is great for software developers who need fresh development and testing environments. Virtual servers are also useful in cloud environments, enabling you to spin up entire new VMs while owning no hardware at all.
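As a rough illustration of that kind of software-driven management, the sketch below snapshots an existing VM and then defines and starts a new one from a prepared template, again using the libvirt Python bindings. The VM name, snapshot name and template path are placeholders invented for the example, not values from any real environment.

# Sketch: snapshot an existing VM, then provision a new VM from a template.
# The domain name ("app-server-01"), the snapshot name and the template path
# are hypothetical placeholders.
import libvirt

conn = libvirt.open("qemu:///system")

# 1. Snapshot a VM in its current state so it can be restored later.
dom = conn.lookupByName("app-server-01")
snapshot_xml = """
<domainsnapshot>
  <name>pre-upgrade</name>
  <description>State captured before an application upgrade</description>
</domainsnapshot>
"""
dom.snapshotCreateXML(snapshot_xml, 0)

# 2. Provision a new VM from a template definition prepared earlier.
with open("/var/lib/libvirt/templates/dev-env.xml") as f:  # hypothetical template path
    template_xml = f.read()
new_dom = conn.defineXML(template_xml)  # register the new VM with the hypervisor
new_dom.create()                        # power it on

conn.close()

In practice you’d usually clone the template’s disk image as well and give each new VM a unique name, but the point stands: the whole operation is just software calling software, so it can be scripted and repeated in seconds.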

VMware is the granddaddy of virtualisation companies, but Microsoft has made great gains with its Hyper-V virtualisation platform, built into Windows Server. The open source world also has its own solution in the form of the Linux Kernel-based Virtual Machine (KVM).

If you’re one of the few companies that hasn’t explored server virtualisation yet, it’s worth a look. It could transform your IT budget overnight. It’s also a must-have before you can enjoy the additional benefits that cloud computing and DevOps offer. 
