Virtualization is the hot topic of the day. Be it application virtualization, OS virtualization or presentation virtualization: if you can virtualize it, someone has probably slapped that label on it.
The thing is, all of these technologies have been around for some time, even years in some cases. Multi-user Windows has existed in various forms since 1992 and became a core part of the Windows operating system with Windows 2000. VMware Inc. has been evangelizing virtualized servers and clients since the company's inception, and Softricity's SoftGrid was talking up application isolation and streaming long before Microsoft purchased the company in 2006. People were buying it, too.
So why is virtualization a hot topic now, instead of two years ago?
There are several possible reasons. Virtualization features have improved with every release, making the virtualized experience feel more like working on a non-virtualized computer. Increased interest in environmentally friendly computing has also fostered interest in remote access and server consolidation.
Still, perhaps the most important reason why virtualization has become such a hot topic is that the infrastructure now exists to support it and make it scale while ensuring a rich experience. Reliable high-speed LANs and WANs are part of that infrastructure, as is 64-bit Windows.
In fact, 64-bit Windows is a key part of virtualization because of the one major virtualization bottleneck -- memory. Let's take a look at the relationship between physical memory (the DIMMs you install in your computer) and virtual memory (the place where the operating system stores data and applications in use).
In a 32-bit system, Windows can address up to 4 GB of virtual memory. Two gigabytes of virtual memory are shared among kernel-mode processes that support core functions of the operating system, and 2 GB are allocated individually to each user-mode process and isolated from all other user-mode processes. The number of virtual memory addresses available to user-mode processes may appear enormous because each process sees the entire 2 GB area for its exclusive use.
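The arithmetic behind that split is worth seeing on its own. Here's a minimal sketch of the default 32-bit layout described above (the numbers are the standard 2 GB/2 GB division; variable names are illustrative):

```python
# Sketch of the default 32-bit Windows virtual address split.
ADDRESS_BITS = 32
TOTAL = 2 ** ADDRESS_BITS       # 4 GiB of virtual addresses
USER = TOTAL // 2               # 2 GiB, private to each user-mode process
KERNEL = TOTAL - USER           # 2 GiB, shared kernel-mode space

print(f"Total:  {TOTAL // 2**30} GiB of virtual addresses")
print(f"User:   {USER // 2**30} GiB per process, isolated from other processes")
print(f"Kernel: {KERNEL // 2**30} GiB shared by the operating system")
```

Note that every process sees the same 2 GB user range; isolation comes from each process having its own mapping of those addresses, not from each process getting different address numbers.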
But in order for virtual memory to be useful, the memory manager must be able to map each virtual address to a physical location so that when the data is needed, the operating system knows where to go get it. Windows does this through a system of pages that store the data, page tables that index the pages, and page table entries that record each mapping. Together, these document how a virtual memory address maps to a physical location.
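The lookup itself can be sketched as a toy translation function. This is a deliberate simplification, real 32-bit x86 hardware walks a two-level structure (a page directory plus page tables), and all the numbers here are illustrative, but it shows the idea of splitting an address into a page number and an offset:

```python
# Toy single-level page table: virtual page number -> physical frame number.
# Real x86 uses a two-level page directory/page table walked by hardware;
# the mappings below are purely illustrative.
PAGE_SIZE = 4096  # 4 KB pages, as on x86

page_table = {0: 7, 1: 3, 5: 12}  # page table entries: virtual page -> frame

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # Not resident in RAM: the OS would service a page fault,
        # possibly reading the page back in from the page file.
        raise MemoryError("page fault")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # virtual page 1 maps to frame 3 -> 0x3004
```

The `MemoryError` branch stands in for the page-fault path: when a virtual page has no physical backing, the data must be fetched from disk, which is why the page file is so much slower than RAM.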
The 32-bit operating system's method of mapping virtual addresses to physical ones works for up to 4 GB of physical memory under normal circumstances, since the addresses are 32 bits long. Virtual memory that cannot be backed by RAM must instead be backed by an area of hard disk called the page file, which provides alternate storage but is far slower than RAM.
Note: For more details on the differences between 32-bit and 64-bit memory, check out this article: Give your system memory a boost with x64.
The issue here is that on a virtualized system, there are going to be a lot of user-mode processes. A single computer may support half a dozen or so users for virtualizing desktops using technology like Microsoft's Hyper-V or VMware's ESX Server, and it may support dozens or hundreds of users for virtualizing applications using Terminal Services. Every user will have his or her own set of applications, and all those applications were originally designed to run on a single-user computer. Virtualization platforms are designed to be as parsimonious as possible with memory, but at the end of the day they're bound by the demands of the applications.
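A quick back-of-envelope calculation shows how fast those demands outgrow a 32-bit system. Every figure below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope terminal server sizing. All figures are assumed,
# illustrative values, not measurements.
mb_per_session = 75        # assumed working set per user session, in MB
users = 100                # a mid-sized Terminal Services deployment
system_overhead_mb = 1024  # assumed OS and service overhead

total_mb = users * mb_per_session + system_overhead_mb
print(f"{total_mb / 1024:.1f} GB needed")  # well past a 32-bit OS's 4 GB
```

Even with a modest per-session footprint, a hundred sessions blow past the 4 GB that a 32-bit operating system can address, which is exactly the pressure that pushes large deployments to 64-bit Windows.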
Virtualized PCs have an even greater problem than terminal server sessions: the entire operating system must be virtualized to support each connection. Another issue is that virtualization becomes the victim of its own success. If the virtualized experience is limited, people won't like it. But if it has most of the same features as a non-virtualized platform, then supporting those features takes resources. For example, the new support for monitor spanning in Windows Server 2008 Terminal Services requires more memory than a single monitor does because the viewing space is larger.
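The monitor-spanning point is simple arithmetic on the display buffer. The resolutions and the 32-bit color depth below are illustrative assumptions, but they show why a wider viewing space costs more memory per session:

```python
# Rough display buffer arithmetic for monitor spanning.
# Resolutions and color depth are illustrative assumptions.
BYTES_PER_PIXEL = 4  # 32-bit color

def framebuffer_mb(width, height):
    """Memory for one uncompressed frame of the given size, in MB."""
    return width * height * BYTES_PER_PIXEL / 2**20

single = framebuffer_mb(1280, 1024)       # one 1280x1024 monitor
spanned = framebuffer_mb(2 * 1280, 1024)  # two monitors side by side

print(f"{single:.1f} MB vs {spanned:.1f} MB per session")
```

Doubling the width doubles the buffer, and on a terminal server that extra cost is paid once per connected session, not once per machine.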
Therefore, you need an efficient virtualization platform with enough memory to back it properly. Although terminal servers have used 32-bit operating systems for smaller deployments, 64-bit platforms -- combined with adequate processor support and a disk topology designed to reduce I/O bottlenecks -- will be necessary to support larger deployments. And that's just as true for virtualized operating systems attempting desktop replacement. For this reason, Microsoft's Hyper-V is available only on 64-bit operating systems, although you can install 32-bit operating systems as guests on Hyper-V.
There are some catches to 64-bit operating systems, too, of course. For one, 64-bit processes use more memory than their 32-bit counterparts, so the move isn't worthwhile until you run enough processes to need more than 4 GB of memory. In addition, 64-bit operating systems need 64-bit drivers, which can be harder to find. Still, although they require more planning to implement, 64-bit operating systems are the future, especially since they are more or less required to support the virtualization people are looking for.
ABOUT THE AUTHOR
Christa Anderson is a program manager on the Terminal Services team at Microsoft and author of the forthcoming Windows Terminal Services Resource Kit from Microsoft Press. She is an internationally known authority on scripting, the author of Windows Terminal Services, The Definitive Guide to MetaFrame XP, and co-author of the book Mastering Windows 2003 Server. If you have a scripting question for Christa, please email her at editor@SearchWincomputing.com. She often uses these emails as fodder for her scripting columns.