A wish list for the future of System Center Virtual Machine Manager

With its new focus on the private cloud, Microsoft could be on the right track with SCVMM. Here are four capabilities that will hopefully be included in future versions.

As a Microsoft MVP, I see plenty of nifty technology buried deep inside Microsoft’s labs. While bomb-proof NDAs prevent us from sharing most of what we’ve seen, the privilege of just being allowed in the door is one I wouldn’t trade for almost anything.

That exposure to what’s upcoming, however, sometimes limits what we can talk about publicly. Once we’ve seen something, we can’t talk about that thing until we’re given permission. It’s a fair trade.

That said, my MVP focus doesn’t lie with the team that manages System Center Virtual Machine Manager (SCVMM). I haven’t been given the opportunity to see what’s on the horizon, so you and I know about the same when it comes to what’s next. And sometimes that can be a good thing, such as when you’re writing an article about the things you really want to see in a product.

Virtual Machine Manager meets the private cloud
I’ve been working with SCVMM since its initial release, watching it evolve in near-lockstep with the Hyper-V virtualization platform. Yet, while that evolution has introduced some useful features, I find it still lacks important capabilities that I hope Microsoft will incorporate in the near future.

But what SCVMM is today and what it can become as the years go by represent two very different products. Based on trends in virtualization that are only recently becoming evident, the SCVMM of a few years from now will look quite different from the one we administer today. The biggest influence is our industry’s second major evolution in virtual systems management: private clouds.

If you’re still confused about private clouds, don’t worry -- so are most people. That’s because today’s notion of a private cloud doesn’t necessarily refer to a product that you can see, feel and interact with. Rather, the private cloud represents an entirely new approach to how you’ll manage your virtual resources.

I’ll give you my definition: While virtual machines are the vehicles that drive your business’ applications, the private cloud represents the infrastructure in which those virtual machines reside -- and that’s pretty much it.

Because of that confusion, I prefer to use the term private cloud thinking as opposed to private cloud computing when I’m out training and assisting clients with strategic planning. Private cloud thinking -- at least in my experience -- represents that second level of abstraction, shifting your mind further away from the hardware that constitutes your virtual environment.

Think of it like this -- basic server virtualization was all about abstracting physical resources like processors and RAM from the operating systems that use them. With basic virtualization, ten different virtual machines could share a set of physical resources with the goal of optimizing their use.

Private cloud thinking takes this abstraction one step further. Zoom out from looking at a single VM and instead ponder your entire physical infrastructure. Correctly managed, your private cloud can abstract that entire infrastructure down to a short series of numbers. Those numbers represent the capacity of resources your hardware provides, including memory, processing, network throughput, storage capacity and throughput, and potentially others. Each easy-to-manage number represents the "oomph" your hardware contributes to virtual machines.

What’s interesting is that these numbers can be calculated today. With the monitoring that’s already built into each hardware component, all that’s needed is a central solution that gathers those metrics and presents them in a useful way.
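To make that idea concrete, here’s a rough sketch in Python of how such a central solution might roll per-host capacity and per-VM demand up into a handful of supply-and-demand numbers. This isn’t any real SCVMM interface; every host, VM and figure below is invented for illustration, and a real solution would pull these metrics from the monitoring you already have in place.

```python
# Hypothetical sketch: roll per-host capacity and per-VM demand up into
# a few supply-versus-demand numbers. All names and figures are invented;
# a real solution would gather them from existing hardware monitoring.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_ghz: float      # aggregate processing capacity, in GHz
    ram_gb: int         # physical memory, in GB
    storage_tb: float   # usable storage, in TB

@dataclass
class VM:
    name: str
    cpu_ghz: float      # measured processing demand, in GHz
    ram_gb: int
    storage_tb: float

hosts = [Host("blade-01", 57.6, 256, 4.0), Host("blade-02", 57.6, 256, 4.0)]
vms = [VM("web-01", 8.2, 16, 0.2), VM("sql-01", 22.5, 64, 1.1)]

metrics = ("cpu_ghz", "ram_gb", "storage_tb")
supply = {m: sum(getattr(h, m) for h in hosts) for m in metrics}   # what the hardware provides
demand = {m: sum(getattr(v, m) for v in vms) for m in metrics}     # what the VMs consume

for m in metrics:
    headroom = supply[m] - demand[m]
    print(f"{m}: supply={supply[m]}, demand={demand[m]}, headroom={headroom}")
```

Track those headroom figures over time and you have exactly the trend line that capacity planning and budgeting conversations need.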

The future of Virtual Machine Manager
That central solution could be SCVMM v.Next, or more likely a v.Next.Next released at some point in the medium-term future. My vision of that future solution includes four important capabilities that realize this new mindset of private cloud thinking:

  • Resource quantification, exposure and planning. The first step toward evolving VM administration starts with reconsidering how we provision resources. Moving past the notion of “how many processors” and “how much RAM” a VM needs is an important part of that evolution.

    Private cloud thinking and the SCVMM of the future should quantify and expose provisioned resources in their most elementary form. Examples include “MHz of processing” as opposed to “Number of Processors”, and so on. Doing so enables administrators to better understand the resources their hardware supplies (“I have 45,600 GHz of processing capacity”) in comparison with the resources their VMs demand (“My VMs are using 41,000 GHz of that capacity right now”).

    Exposing resources in their most elementary form simplifies performance management activities (“If my supply exceeds my demand, my VMs are getting what they need”). At the same time, it enables more quantitative planning for expansion (“At our current rate of consumption, we’ll need another blade chassis by Q3”). Administrators will like this because it provides hard data for managing VM activities. Business owners and budget holders will like it too because expansion needs can be planned by trend-lining consumption.

  • VM resource templatization. We’ve been using VM templates since right around the very first virtual machine. Because VMs live their lives as files on disk, creating a new one starts with copying a template from a library. Yet while our templates have sped up the provisioning process, we’re not yet able to assign a cost to each one we build.

    One half of the supply and demand equation at the core of private cloud thinking is concerned with the resources virtual machines consume. Therefore, a second new feature in the SCVMM of tomorrow will be the assignment of cost metrics to VM templates. Having this data means knowing whether a new virtual machine request can truly be supported by the private cloud infrastructure before that VM is produced and creates a problem.

    Larger environments with thousands of VMs do some of this today. They can because they have experience with the resource needs of different categories of VM workloads. More VMs obviously means more data to work with, but Microsoft and the hardware vendors have access to this kind of data as well, and they can contribute that intelligence to assist smaller, less mature environments. Baking this data right into SCVMM helps the small shop buy hardware that meets its needs, and it helps that shop maintain stability as its needs, workloads and number of VMs grow.

  • Relating metrics to money. The first two capabilities achieve most of what private cloud thinking aims to accomplish, but a third important feature uses that same data to relate IT to the business in ways that weren’t possible before. Once IT resources are quantified into their constituent components, it becomes trivial to assign a dollar cost to each metric (“It cost us $50,000 to bring 1,000 GHz of processing online. That means each assigned GHz costs us fifty bucks”). A rough sketch of this arithmetic appears after this list.

    Businesses that receive this kind of information can make informed decisions about the affordability of new projects, or about whether to keep systems we now consider legacy around (“That machine costs us $6,000 to keep around, but migrating its services somewhere else will only cost $2,000 -- let’s consolidate”).

    They can also work more closely with IT to determine whether future plans will be inhibited by IT costs (“The risk and reward on this new project is far below IT’s cost model for its computing needs”). Most importantly, IT gains the quantitative data it needs to budget for and request more hardware in exact dollars, and before situations grow out of control (“If we don’t add another 1,000 GHz of processing by next quarter, we’re maxed out on resources -- oh, and here’s what that’ll cost”).

  • Private/public/hybrid cloud transparent flexibility. My assertion that IT’s future is in GHz and IOPS rather than processors and hard drives might seem outlandish until you realize that the largest cloud service providers are there already. When you buy into Microsoft’s Azure platform, you pay for the resources you consume. You’re not shelling out money for the number of network cards you’re using; you’re paying for the bandwidth, by the megabit. As you can see, private cloud thinking already exists in many ways today.

    There’s another future as well that further abstracts private and public clouds into a collective whole. It creates a kind of hybrid cloud, with options for processing VMs locally or somewhere else. The future’s SCVMM, once augmented with all these nice resource quantification features, should be transparently capable of relocating VMs wherever they’re needed or wherever their processing is best performed. Now that their resource needs are known and the cost for those resources is established, administrators can make educated decisions about where they should be placed (“I’m moving this VM to our public cloud provider because it needs extra resources we can’t spin up fast enough”).

    A public/private cloud hybrid also introduces the notion of dynamic, geographically appropriate computing. Data centers today waste too many resources getting applications hosted in the United States to run well in EMEA, or services hosted in South Africa to perform well in Argentina. As private cloud thinking continues to evolve, the potential for a global marketplace of VM processing grows. With such a marketplace in place (and with the proper security controls -- a topic for another day), it becomes possible to run any VM in whatever location offers the lowest cost.
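The arithmetic behind the last three items is simple enough to sketch. Here’s a back-of-the-napkin example in Python -- every rate, template and figure is made up for illustration, and none of it reflects an actual SCVMM feature or a real cloud price list -- showing how quantified costs could drive a private-versus-public placement decision:

```python
# Hypothetical sketch tying template costing, cost-per-GHz and hybrid
# placement together. Every number, name and rate is invented.

# Cost of bringing private capacity online, spread across that capacity.
private_capex_dollars = 50_000.0
private_capacity_ghz = 1_000.0
private_cost_per_ghz = private_capex_dollars / private_capacity_ghz  # $50 per GHz

# An assumed public-cloud rate, already normalized to dollars per GHz.
public_cost_per_ghz = 65.0

# "Templatized" processing demands for a few requested workloads.
template_demand_ghz = {"small-web": 2.0, "medium-app": 8.0, "large-sql": 40.0}

def place_vm(template: str, free_private_ghz: float) -> str:
    """Pick a location for a new VM based on remaining private headroom."""
    needed = template_demand_ghz[template]
    if needed <= free_private_ghz:
        cost = needed * private_cost_per_ghz
        return f"{template}: run privately ({needed} GHz, ~${cost:,.0f} of provisioned capacity)"
    cost = needed * public_cost_per_ghz
    return f"{template}: burst to the public cloud ({needed} GHz, ~${cost:,.0f})"

# Example: only 30 GHz of private headroom remains this quarter.
for name in template_demand_ghz:
    print(place_vm(name, free_private_ghz=30.0))
```

The point isn’t the specific numbers; it’s that once supply, demand and cost are expressed in the same units, decisions like these become arithmetic rather than guesswork.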

Again, the core of these items exists today, and some are already showing up in third-party solutions. Others are being developed by the cloud providers themselves. Adding the necessary components to SCVMM down the road is something Microsoft will need to do if it wants to stay relevant in this fast-moving facet of our IT industry.

You can follow SearchWindowsServer.com on Twitter @WindowsTT.

ABOUT THE AUTHOR:
Greg Shields is a Partner and Principal Technologist with Concentrated Technology, an IT analysis and strategic consulting firm. Contact him at http://www.ConcentratedTech.com.

This was first published in February 2011
