Virtual Computing Paradigm Shift

According to Moore’s Law, the number of transistors on a chip roughly doubles every 18 to 24 months. In a very short time we have gone from a few thousand transistors to over a billion on a single CPU, and this has driven some amazing new desktop and server technologies. Today you can pick up a very capable workstation for only a few hundred dollars; the same technology was virtually nonexistent 10 years ago and cost thousands of dollars only a few years ago.

The real significance of this, however, is that computing horsepower has become readily available. In particular, the servers deployed in small and medium organizations are generally overpowered: common tasks such as email, document sharing, and printing are easily handled by the systems of 3-5 years ago. Today a single computer can perform the work of many servers, and if you equip that same computer with a second CPU you can roughly double that workload again.

All of this can be accomplished with virtualization products such as VMware and Microsoft Virtual PC. These applications can run multiple discrete operating systems on the same physical computer, so on one server you could boot many virtual servers, each performing a discrete task.

Why not simply install all of that software directly on the host system, without virtualization? Because maintaining separate server images prevents collisions between disparate server tasks. Nearly every application wants to run a web site on the same default ports, some systems have very specific operating system support requirements, and applications may need conflicting changes to shared configuration files and system DLLs or shared libraries.
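The port-collision point can be made concrete with a short sketch: two processes on one host cannot both bind the same address and port, which is exactly the conflict that separate virtual machines, each with their own IP address, avoid. This is a minimal illustration using Python's standard socket module; the loopback address and OS-assigned port are just for demonstration.

```python
import socket

# First "server" binds a port. Port 0 asks the OS to pick a free one,
# so this sketch runs cleanly on any machine.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]
first.listen(1)

# Second "server" tries to bind the very same address and port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # raises OSError: address in use
    collided = False
except OSError:
    collided = True
finally:
    second.close()
    first.close()

print("second server blocked:", collided)
```

Inside separate virtual machines, both applications could happily listen on the same port number because each guest has its own network stack.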

Using virtualization, you could commission a single 3.4 GHz server-class system running Windows NT 4.0, Windows Server 2003, and Solaris x86 all at the same time. Each of these virtual servers could have a unique name and serve web-hosted applications on the default ports. Each also has its own IP address, and with multiple network cards installed, each can be given a dedicated NIC.

Backups are simplified when the entire operating system lives in a disk image used by the virtual machine. Many of these products also provide snapshot capability, so before a critical operation you can snapshot the entire operating system and simply revert to the last known-good image if anything goes wrong.
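As a sketch of that workflow, the snippet below builds the command lines for VMware's vmrun tool, which provides snapshot and revertToSnapshot subcommands. The .vmx path and snapshot name here are hypothetical, and the commands are only constructed, not executed; a real script would pass them to a process launcher.

```python
# Hedged sketch: assemble vmrun snapshot/revert commands for a
# "snapshot before critical operation" routine. Nothing is executed.
def snapshot_cmd(vmx_path: str, name: str) -> list[str]:
    # Take a named snapshot of the guest's entire state.
    return ["vmrun", "snapshot", vmx_path, name]

def revert_cmd(vmx_path: str, name: str) -> list[str]:
    # Roll the guest back to the last known-good snapshot.
    return ["vmrun", "revertToSnapshot", vmx_path, name]

# Hypothetical guest image path and snapshot label.
cmd = snapshot_cmd("/vms/fileserver/fileserver.vmx", "before-patch")
print(" ".join(cmd))
```

In practice you would run the snapshot command, perform the risky upgrade or patch, and keep the revert command ready in case the operation fails.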

If something really does go wrong, the image only needs to be available on another computer, where it can be started. This provides warm spares without the need for clustering, though a true failover scenario is still possible, even between two virtual machines on two different physical computers. Imagine a small business with a warm spare: it can start the virtual image on any workstation until the server is fixed and brought back online.

The fact is, most servers in your organization sit idle. There is no reason they should not run at 30% CPU, as long as they have enough breathing room to support peak demand. As processor technology continues to evolve, our deployment plans should mature along with the hardware. It is simply unacceptable for a high-performance server to experience a peak load of 0.1% while corporate IT enjoys the privilege of replacing that hardware every other year.
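The consolidation argument above is back-of-the-envelope arithmetic, and it can be sketched as such. The per-server average loads below are hypothetical figures for a small office; the 30% utilization cap comes from the text.

```python
# Hypothetical average CPU utilization of four lightly loaded servers,
# expressed as fractions of one host's capacity.
avg_loads = {"email": 0.04, "files": 0.03, "print": 0.01, "intranet": 0.05}
cap = 0.30  # target combined utilization, leaving headroom for peaks

total = sum(avg_loads.values())
print(f"combined average load: {total:.0%}")
print("fits under the cap:", total <= cap)
```

Even with all four workloads consolidated onto one host, the combined average load sits well under the 30% target, which is exactly why the author argues those machines should never have been four separate boxes.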