Is virtualization a step backwards?

Categories: English Geeky

A note on Slashdot describes vApp, a tool that will allow developers to ‘encapsulate the entire app infrastructure in a single bundle — servers and all.’ Indeed, part of the push behind virtualization is that you can have each application running on its own instance of the operating system, sharing the hardware resources among many such app/OS “bundles”.

I think this way of seeing things is dangerous! Let’s review a bit of history. First, application programs ran standalone on a computer. As more and more programs appeared, it became clear that they all required the same common services: memory management, input/output, disk access, printing, graphics routines, and so on. Thus operating systems were born: the OS would handle these common tasks and free application programmers from reimplementing them. An added benefit was that the OS could arbitrate access to these resources and enable multitasking of several applications, since all the apps talk to the OS through APIs and need not concern themselves with low-level details.

Then beasts such as Windows appeared. Both the OS and the applications that use it are so brain-dead that most vendors selling server-grade Windows applications basically require that each app have its own dedicated server running a standalone Windows installation.

This, of course, is ridiculous and byzantine. This is where VMware came in, having realized that a typical organization could have, say, 10 servers each running at 5% usage, each hosting a mission-critical application that absolutely must have the server to itself. And they said: “how about we run 10 instances of Windows, isolated from each other through virtualization, so we can have a single box at 50% usage running all 10 apps the way they want?”

This is indeed VMware’s bread and butter. But beware! Are you noticing a trend here? By “demoting” each OS/app set to “app bundle” status, VMware is indeed taking a step backwards. Okay, so they want VMware ESX to take the place of the traditional OS, with each application/OS bundle running on its own. This looks suspiciously similar to the “every app has to do everything by itself” model we escaped from a couple of decades ago!

Sure, as an application programmer I was freed from having to write my own routines for a lot of tasks (on systems such as Mac OS or a decent Linux graphical environment, the libraries free me from a LOT of mundane chores). But the second killer advantage of an OS providing services is efficiency: one piece of software provides each service to all applications, so I run one OS for all my apps and save on memory, disk space and CPU cycles.

By moving the actual OS role down to the hypervisor (VMware ESX), we get a layer that provides only very basic services to the “apps” on top of it, each of which is a full OS. So indeed, every app now carries a gigantic “library” of functions, since that library is, in effect, an entire operating system. The overhead of running several copies of the OS is enormous: each Windows installation takes up a couple of gigabytes of disk, consumes a few hundred megabytes of RAM, and claims a fair share of CPU cycles. On startup you have 10 copies of Windows, all performing the exact same boot sequence and reading the same files (albeit from different disk locations, so there is no caching performance boost).
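
To put rough numbers on it, here’s a trivial back-of-the-envelope sketch. The per-instance figures are this post’s own ballpark guesses (“a couple of gigabytes”, “a few hundred megabytes”), not measurements:

```python
# Back-of-the-envelope cost of duplicating the OS per app.
# Figures are the post's ballpark estimates, not benchmarks.
n_instances = 10
disk_gb_per_os = 2    # disk footprint of one Windows install (assumed)
ram_mb_per_os = 300   # resident RAM of one idle Windows install (assumed)

print(f"Disk spent on identical OS copies: {n_instances * disk_gb_per_os} GB")
print(f"RAM spent on identical OS copies: {n_instances * ram_mb_per_os} MB")
# With one shared OS, each of these costs would be paid exactly once.
```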

Worst of all, without proprietary hacks you also lose the important benefit of interprocess communication. After all, each app is isolated from the others (one of VMware’s purported benefits, remember) by virtue of running under its own OS instance.
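
For what it’s worth, here’s a minimal sketch of the kind of cheap, same-kernel IPC you give up: two processes exchanging a message over a socketpair, which works only because they share one OS. (A POSIX-only illustration; across two VMs there is no shared kernel, so you’re back to full-blown networking.)

```python
# Two processes on one kernel talking over a kernel-mediated socket pair.
# This channel cannot exist between processes in two separate VMs.
import os
import socket

parent, child = socket.socketpair()

if os.fork() == 0:          # "app B": the child process
    parent.close()
    child.sendall(b"hello from a sibling process on the same OS")
    child.close()
    os._exit(0)
else:                       # "app A": the parent process
    child.close()
    print(parent.recv(1024).decode())
    parent.close()
    os.wait()
```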

So who is the culprit here? Sure, most of the blame lies with poorly programmed Windows applications that can’t work without littering your entire hard drive with DLLs, and that barf if an unfamiliar process happens to be running at the same time. But this trend is spreading to other operating systems (Zimbra, I’m looking at you). A huge step backwards looms over us once developers begin to think “hey, I can actually take control of the entire operating system and bend it to my app’s will and requirements; after all, if the user has a problem with that, he can always virtualize my app and its OS”.

What is needed is a return to well-behaved applications: ones designed from the ground up to play well with others, and that, by that very design, do not interfere with them.

I realize this might be difficult; after all, with all the dependencies between system components, it’s understandable that my app’s database configuration requirements might break another app’s. But then again, the solution is NOT to run two apps with TWO separate databases on TWO different operating systems. Either I design my app NOT to mess things up, or I provide a non-system-wrecking component that gives me the service I want. Sure, it’d be a pain in the ass to run two instances of SQL Server, each on a different directory and a different port, but it beats running two entire copies of Windows. Or wait, wasn’t Windows supposed to be stable enough for this already?
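
As a sketch of that “two instances on one OS” idea, here it is with PostgreSQL standing in for SQL Server (I can vouch for PostgreSQL’s command-line tools; the paths and ports are made up for illustration):

```python
# Two independent database instances sharing one OS: each gets its own
# data directory and TCP port, but they share one kernel, one filesystem
# cache and one set of system libraries. Paths and ports are illustrative.
import subprocess

instances = [
    {"datadir": "/srv/pg/app1", "port": 5433},
    {"datadir": "/srv/pg/app2", "port": 5434},
]

for inst in instances:
    # Create a private data directory for this instance (first run only).
    subprocess.run(["initdb", "-D", inst["datadir"]], check=True)
    # Start the instance listening on its own port, logging to its own file.
    subprocess.run(
        ["pg_ctl", "-D", inst["datadir"], "-o", f"-p {inst['port']}",
         "-l", f"{inst['datadir']}/log", "start"],
        check=True,
    )
```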

Still, I think it’s a matter of politeness and cooperation among developers not to require me to wreck my OS, or to virtualize, just to run an application. The reasons for virtualizing should be different: consolidation of workloads, isolation for security or experimentation purposes, ease of deployment and restoration in case of disaster. Because, hey, do you all remember when everybody was saying “one of the advantages of Windows is that developers don’t have to implement printing, graphics, file access, GUIs and sound separately for each app and each piece of hardware out there! The OS gives us that service”?

Sure, developers deserve a break, but that’s no excuse to be lazy. Think of us, the sysadmins of the world, who also have to care for and feed the operating system instances your apps run on. And trust me, each OS instance, however virtual it might be, still counts as a separate server, with the same care-and-feeding needs as a standalone box. However cool it might sound, I’d rather not wrestle with 150 virtual servers when 5 well-kept instances would do the same job. KTHX!