
Windows Server 2016 features focus on evolving IT landscape


With numerous updates and innovations to storage, security and virtualization, Microsoft attempts to lock in its place in today's developer-friendly world.

With the official release of Windows Server 2016 just a few months away, Microsoft released its final technical preview in April to concentrate on polishing the features of its next major server operating system. It's been a long time coming when you consider the first technical preview came out in October 2014.

Microsoft has added -- or upgraded -- numerous Windows Server 2016 features, reworking its flagship product to fit in a world where developers can spin up web-based startups seemingly overnight -- think Instagram, Spotify and Uber. Microsoft has partnered with Docker, maker of the popular application container engine, and built a minimal version of the full Windows Server 2016 called Nano Server to speed up deployments in today's web-scale environments.

Those are just a few of the new Windows Server 2016 features expected to come to fruition in the third quarter of 2016. Microsoft has also enhanced its software-defined storage capabilities with Storage Spaces Direct and reinforced security in Hyper-V with shielded VMs and Host Guardian Service.

To talk about some of these new features, SearchWindowsServer spoke with Thomas Maurer, a Microsoft MVP for Hyper-V and a cloud architect for Swiss-based itnetX, an IT consulting company and Microsoft partner. We asked Maurer about some of the changes with storage and virtualization in Windows Server 2016 -- and the ways administrators can prepare themselves for a future that has tilted toward the cloud and containers.

(This transcript has been edited for clarity and brevity.)

Transcript - Windows Server 2016 features focus on evolving IT landscape

Microsoft recently released Windows Server 2016 Technical Preview 5 (TP5). What's new with Hyper-V in Windows Server 2016?

Thomas Maurer: Microsoft [made] a huge investment in this release again in Hyper-V and all the side features related to Hyper-V, such as storage and networking. There are a lot of new features, but if I had to highlight one or two, I would definitely say shielded VMs. This is something really, really new to the hypervisor world. So, in the past, we as virtualization admins were always protecting our Hyper-V hosts. Right?

And this was good, and we were protecting ourselves from the bad guys inside the VM. But now, who protects the data inside the VM from the administrators? If you're a Hyper-V administrator -- or VMware administrator or storage administrator -- you have access to the virtual machine's data. This is a problem if you think about service providers and also if you want to move a server to the cloud; your data is in there and you don't want the virtualization administrator of the service provider to access your data.

That's something Microsoft focused on in this release. They introduced a concept called shielded VMs. It consists of different security technologies. For example, one is virtual secure mode, which protects the memory and the processes inside the virtual machine from being accessed by an administrator.

Another thing Microsoft introduced is the Host Guardian Service. The Host Guardian Service makes sure that the virtual machine only starts in trusted hardware. So, for example, if an administrator copies the virtual machine, moves it to his private notebook or his private lab, and tries to start it up, it won't start.

And then the last one, which is probably one of the most important ones, is the shielded VM part itself. So, there will be a possibility for a virtual Trusted Platform Module chip inside the virtual machine to protect the virtual machine using BitLocker and encrypt the whole storage. And I think this part is going to be really interesting, especially when you think about clouds and moving VMs to the cloud.
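
For readers who want to experiment with a virtual TPM in a lab, here is a minimal PowerShell sketch. The VM name is a placeholder, and the local "guardian" used here is a test-only shortcut; a production shielded VM would get its key material from a full Host Guardian Service deployment.

    # Lab-only sketch: add a virtual TPM to an existing generation 2 VM.
    # 'VM01' is a placeholder; the VM must be powered off first.
    Import-Module HgsClient

    # Create a local guardian and key protector (test-only; a real shielded
    # VM would use key material attested by the Host Guardian Service).
    $guardian = New-HgsGuardian -Name 'LabOwner' -GenerateCertificates
    $keyProtector = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

    # Attach the key protector to the VM and enable the virtual TPM,
    # after which BitLocker can be turned on inside the guest.
    Set-VMKeyProtector -VMName 'VM01' -KeyProtector $keyProtector.RawData
    Enable-VMTPM -VMName 'VM01'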

Do you think these Hyper-V security features will encourage people to start thinking more seriously about moving into the cloud?

Maurer: Yeah, absolutely. I work in Switzerland, and I work directly with a lot of customers in the finance sector and they have concerns about their data. They have to make sure that their data is protected. Especially in that sector, people are really thinking about, 'OK, we can use that feature and give our virtual machines basically to service providers, or probably Azure, or other cloud providers as well.' I think that it's going to boost it a little bit. Of course, there's still some legal concerns and things going on, but in terms of technology, this will really help.

How close is Hyper-V getting to ESXi in terms of adoption? I've been hearing people say that the Hyper-V hypervisor has improved quite a bit, but the management is still somewhat difficult compared to what you get in vSphere. Is that accurate?

Maurer: As a Microsoft MVP, I wouldn't say that it's more difficult. I will say it's different. But I agree. If you're going to VMware compete projects where we compare Hyper-V against VMware, it always comes [down] a little bit … to the management stack. On the hypervisor level in terms of performance, both have advantages and disadvantages, but they're basically on the same level. To be honest, in my case, it doesn't really matter … which hypervisor you use. It's not about only the hypervisor. It's a little bit more about the ecosystem as a whole. So, for management, it's kind of different, right?

Microsoft is really going strong in cloud management and thinking really big using System Center, Virtual Machine Manager, Azure and a lot of PowerShell to do some automation tasks. If you're in the VMware space, you [have] vCenter where it's very simple to manage, and it does a great job there. In the Microsoft world, you have to rethink that a little bit.


For example, Virtual Machine Manager is not just a replacement for vCenter [but] it brings a lot more to the table. It also brings storage management, network management and things like that. The name Virtual Machine Manager should probably be 'Fabric Manager' or something like that because it does so much more; it's basically the fabric management tool. So, it builds that abstraction layer from your physical hardware to the virtual world. You can create resource pools called clouds and manage your tenants and things like that.

I totally agree. If you just want to have the same thing when you move from VMware to Hyper-V, that's probably not a good approach. It's not designed that way. You have to look at it as a whole in terms of management, and … you want to rethink your storage infrastructure and your network infrastructure as well, because you can get a lot of advantages when you do that.

Software-defined storage is something that's pretty hot right now, and Microsoft is introducing Storage Spaces Direct in Windows Server 2016. Can you go into detail about what Storage Spaces Direct is and what that means to administrators?

Maurer: Microsoft is doing great things in storage in their next release, such as Storage Replica and the other one that you mentioned, Storage Spaces Direct. So, Storage Spaces Direct is kind of like a new version of Storage Spaces. There will still be, let's call it, the classic version of Storage Spaces, where you have a shared JBOD attached by SAS cables to two servers and then create a cluster. So, [with] Storage Spaces Direct, Microsoft did a little bit of rethinking here. What Storage Spaces Direct is, again, it's a cluster. But there are no shared disks using a SAS bus or anything. So, you just use standard servers with local disks.

This can be SATA or SAS drives, hard drives, [solid-state drives] or even NVM Express SSDs. You can then cluster that over several servers just using network connectivity between them. The whole data distribution goes over the network, and it's using SMB 3.0 to do that. And it's recommended that you have network cards using RDMA [Remote Direct Memory Access] for SMB Direct and things like that, and at least a 10 GbE network or even more. We are testing in our labs using 100 GbE networking with RDMA and some SSDs and saw some really good performance.
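
As a rough illustration of how that looks in practice, here is a minimal PowerShell sketch for standing up a Storage Spaces Direct cluster on four nodes. The server and cluster names are placeholders, and the exact validation and enablement steps may differ depending on the build you are running.

    # Sketch: create a Storage Spaces Direct cluster from four standard
    # servers with local disks (node and cluster names are placeholders).
    $nodes = 'S2D-N1','S2D-N2','S2D-N3','S2D-N4'

    # Validate the nodes, including the Storage Spaces Direct tests.
    Test-Cluster -Node $nodes -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

    # Build the cluster without adding any shared storage ...
    New-Cluster -Name 'S2D-CLU' -Node $nodes -NoStorage

    # ... then claim the local disks on every node into one pool.
    Enable-ClusterStorageSpacesDirect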

We could see that when we disabled RDMA, we lost something like 50% of the performance. So, people really need to rethink their storage; they probably want to have these RDMA network cards, which allow you to get higher throughput and lower latency.

Will people be able to use their existing hardware with just maybe a network upgrade to implement Storage Spaces Direct?

Maurer: I'm not too sure about that. You still should be using supported hardware with a Windows Server logo on it. I'm not quite sure yet if there will be something special for Storage Spaces Direct where you can only use certified Storage Spaces Direct hardware. But basically, yes. You can use a standard server, put in some fast network adapters and attach local disks. That's basically the whole concept. Whether it will be supported to just do it yourself with any hardware you want -- we'll see about that. But Microsoft will definitely give out some guidance. …

Another cool thing is you're not only creating a Scale-Out File Server cluster, you also now have a supported solution for hyper-converged clusters. So, your storage nodes can also run the virtual machines at the same time. You don't have to build a file server cluster and a Hyper-V cluster; you just build one cluster where you have both roles enabled and you can use that. This is a similar concept to Nutanix and all the hyper-converged systems out there. You only have to pay for a Windows Server Datacenter license and, of course, the hardware.
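
To make the hyper-converged idea concrete, here is a short hypothetical follow-on to the cluster sketch above: carve a volume out of the Storage Spaces Direct pool and run a VM on the same nodes. The names, size and cluster shared volume path are assumptions and will vary by environment.

    # Sketch: create a mirrored volume on the S2D pool and place a VM on it.
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'VMStore' `
        -FileSystem CSVFS_ReFS -Size 2TB

    # The volume surfaces as a cluster shared volume; the folder name below
    # is an assumption -- check C:\ClusterStorage for the actual mount point.
    New-VM -Name 'App01' -Generation 2 -MemoryStartupBytes 4GB `
        -Path 'C:\ClusterStorage\Volume1'

    # Make the VM highly available on the same nodes that hold its storage.
    Add-ClusterVirtualMachineRole -VMName 'App01'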

Do you think Storage Spaces Direct will be one of the features users will want to try first? Or do you think there's going to be a tendency to hold off?

Maurer: I think there will be several customers who will go for that. And I think it will be the most tested feature in this release because it's so new. Since Technical Preview 3 or even a little bit earlier, we could start to test that feature. And because we started doing a lot of testing … there are a lot of customers out there already doing that with the technical previews.

Whether you implement this feature is probably more about where you are in the storage lifecycle. If I still have, for example, a SAN in place or just bought one a year ago, I'm not just going to replace that because Storage Spaces Direct is here. … But for all those who have to buy new storage or have to evaluate new storage, Storage Spaces Direct should definitely be considered.

There are some cool features in there. For example, Storage QoS, which will be a feature [that] allows you to do storage [quality of service] on, like, virtual machine group levels and things like that. This technology goes through the whole stack. It's based on Hyper-V, but it also has to use the Scale-Out File Server and Storage Spaces. You only get that cool feature if you use the whole stack, right? And so, for people who are switching from VMware -- or whoever is already using VMware and has to go for a new storage solution -- I think they definitely should consider Storage Spaces Direct or have, at least, a look at it.
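
As a hypothetical example of what that looks like, the PowerShell sketch below creates a Storage QoS policy on the cluster and tags one VM's virtual disks with it; the policy name, IOPS limits and VM name are placeholders.

    # Sketch: define a Storage QoS policy and apply it to one VM's disks.
    # Run on the cluster that owns the storage (placeholder values throughout).
    $policy = New-StorageQosPolicy -Name 'Gold' -PolicyType Dedicated `
        -MinimumIops 500 -MaximumIops 5000

    # On the Hyper-V host, attach the policy to the VM's virtual hard disks.
    Get-VM -Name 'App01' | Get-VMHardDiskDrive |
        Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId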

Do you think there's going to be any difficulties with making backups with Storage Spaces Direct?

Maurer: Well, it's going to be a little bit different, but it's not going to be more difficult. … I think there are a lot of improvements around backup with Hyper-V -- for example, change tracking, which is now introduced in Hyper-V itself for backup. Microsoft makes it easier for backup vendors to integrate with their backup solutions. So, backups of virtual machines running on Scale-Out File Server -- or Storage Spaces or Storage Spaces Direct -- I think that will be an easy thing to do. [There] will not be … a lot of challenges here.

The cool thing here is you can also use Storage Spaces Direct as a backup target. You can, for example, use a Storage Spaces Direct cluster with very fast disks, with SSDs and things like that, for running your virtual machines. But you can also have a very cheap solution … with a lot of storage with a little bit slower disks as your backup target. It's not just designed for virtual machines. You can also use Storage Spaces Direct for your really cold data.

Storage Spaces Direct sounds similar to what VMware introduced with the VMware VSAN feature that pools storage. There was a limit to the number of nodes that you could attach to pool the storage. Do you know what the technical specifications are on Storage Spaces Direct?

Maurer: I have heard several different numbers. But again, in that case, Microsoft hasn't really announced anything yet. So, what we know right now, for Technical Preview 5, is that we have to have at least four nodes to get that started. You can also build it if you have a test environment with [fewer] nodes, but you should have four nodes to go in TP5. And about the numbers and scale, like the maximum numbers, this is going to be really interesting because it's being tested right now. All the numbers Microsoft is telling you -- for example, how much memory can a virtual machine have, how many virtual machines can I run on a cluster -- all those numbers are tested numbers.

Microsoft is now starting to test those numbers and testing the scale of the systems and also optimizing the scale. We will see what the real numbers are when Windows Server 2016 ships. They were also asking … people how many nodes they wish to have in there. But there's no official communication right now.

That makes sense that they're not going to issue anything official because they may be continuing to make improvements so those numbers could change.

Maurer: Exactly.

What are the similarities between Nano Server and Server Core? And where are they different? I know the Nano Server's quite a bit smaller than Server Core, but how else are they different? Microsoft tried this minimal footprint before, but it didn't seem to be widely adopted.

Maurer: Microsoft started Server Core with Windows Server 2008. It was just Windows Server and they removed the UI. But it wasn't a real fundamental change. It still had all the applications in there. It still had all the roles in there. You could activate them. But just the UI was gone. And that was OK. But it had some other issues as well. So, management, for example. With 2008 or 2008 R2, there was PowerShell; but to be honest, there weren't really good PowerShell modules available. So, there was no module for clustering. There was no module for Hyper-V. So, you couldn't really use PowerShell to manage Hyper-V. This was also definitely a problem.

But with Nano Server, this is changing. So, we now have the tools. We have PowerShell that can really do some serious management and can basically do everything using PowerShell. Server Core just removed the UI and Nano Server is completely refactored. There is a base image -- they are following that zero footprint model.

So, all the server roles and features are not included in the basic Nano Server image. You have to add that to your image before you deploy it or when the server is running. For that, you can use PowerShell package management. But you have to download it and add it to the server. That's how they keep the footprint very, very small on that system.

Do you think this is going to be an issue for administrators who prefer to use a GUI to manage their servers?

Maurer: This is really going to be a big change for a lot of people out there. So, for all of us Windows administrators who have used Windows Server now for years, this is completely different. Nano Server, in the short term right now, won't replace Windows Server or the full server for everyone. There are still some features missing, and for smaller environments, you probably don't have the need for Nano Server and would prefer the full server, or what they call the Windows Server Desktop Experience. This is definitely going to be a total rethinking, right?

When you look at it today, an administrator goes to a server; he installs Windows Server, goes to Server Manager and adds the roles and features he wants. In the future with Nano Server, you take your PowerShell module. You create a new image. You say, 'OK, I want to have an image including Hyper-V and the failover clustering.' And then probably add some drivers and things like that. You create that image. You take that image and then you're going to deploy that to the server. The administrator builds it up instead of adding roles and features afterwards. So, it's going to be a rethinking of deployment, but as I said, also about management. It's a lot about remote management. No more Remote Desktop Protocol, if you will, to connect to your servers. To do that, you'll have to use the remote management tools you get today.
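
For a sense of what that image-building workflow looks like, here is a hedged PowerShell sketch using the NanoServerImageGenerator module that ships on the Windows Server 2016 media. The drive letters, paths and computer name are placeholders, and the available parameters can vary between technical previews and the final release.

    # Sketch: build a Nano Server VHDX for a Hyper-V cluster node.
    # D:\ is assumed to be the mounted Windows Server 2016 media; paths and
    # the computer name are placeholders.
    Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1'

    # -Compute adds the Hyper-V role, -Clustering adds failover clustering;
    # the resulting image is then deployed to the physical host.
    New-NanoServerImage -DeploymentType Host -Edition Datacenter `
        -MediaPath 'D:\' -BasePath 'C:\NanoBase' `
        -TargetPath 'C:\Nano\HV01.vhdx' -ComputerName 'HV01' `
        -Compute -Clustering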

Is this something in your experience that you've been working with? Is this a very solid release and something that you would encourage people to use right away?

Maurer: Nano Server's pretty new, so Microsoft's invested a lot of time testing Nano Server. So, I don't expect that there will be a lot of issues or unknown issues. But, of course … what we have to consider is how do I manage that? How does that work? And also, which applications do I run on Nano Server? So, I have to be aware that, for example, as of today, I can't create an Active Directory server on Nano Server. So, I can't install Active Directory roles. So, I have to know which roles are supported for Nano Server.

But to be honest, especially for the Hyper-V host part, I would definitely recommend going with Nano Server because the footprint is much smaller and there's less attack surface. I just talked to some guys at a conference, and they were telling us that they are testing Nano Server. They stress-tested it. For example, they put thousands of virtual machines on a Nano Server Hyper-V host. So, I think this is going to be a solid release, especially for the Hyper-V and storage parts.

There has been a lot of buzz around containers the last few years. Can you explain what they are and why they matter to Windows Server administrators?

Maurer: For containers, there are a few words that are really important and made it clear to me what they are: 'Operating system-level virtualization.' If you think about it, what we are doing today with Hyper-V, or VMware, or other hypervisors, we are creating virtual hardware where we then install [an] operating system. … With containers, we are not creating virtual hardware. We are basically creating virtual OS containers. So, we're using the OS and virtualizing the operating system for applications or other tasks in there. And that's … the big difference here.

In terms of isolation, I also try to explain to people that containers are basically something between a process and a virtual machine. They're not as isolated as virtual machines and have a lot less overhead than a virtual machine. But they still live in their own environment, so one process doesn't just connect into another. They still have this isolation. So, they are somewhere between a process and a virtual machine. That makes them really cool for some scenarios.

If you think about it, they're very lightweight. You can deploy many more containers on the same hardware than you can deploy virtual machines because you don't have that OS overhead. They can also start up very fast. It takes just seconds -- or milliseconds -- to start a new container.

With virtual machines, the operating system needs to boot. With containers, the operating system is already running. You just create a new environment where applications can run, and this … takes milliseconds.
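
As a quick illustration of that startup speed, here is a hypothetical example run from PowerShell on a Windows Server 2016 container host with the Docker engine installed; the base image name reflects what Microsoft published around the 2016 release and may differ over time.

    # Sketch: pull the Windows Server Core base image once (this part is slow),
    # then time how quickly a fresh container spins up from it.
    docker pull microsoft/windowsservercore

    Measure-Command {
        docker run --rm microsoft/windowsservercore cmd /c echo hello
    }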

It's still hard, though, to say which applications I'm going to deploy in there. Customers from enterprise companies ask me, 'OK. Now, there are containers. What do I deploy in there?' It's still hard to see what people are going to do. There's not much advice right now from Microsoft to say what containers are designed for. They are giving us a tool and functions and new features -- they're curious about what people are going to do with that.

Of course, there are some use cases. For example, an office can use containers to test applications. Instead of having five virtual machines to test five different versions of this application, you can spin up five containers and it takes just seconds to do that and deploy the application five times. Test/dev scenarios or DevOps scenarios are really good. If you deploy applications like web servers and you have to spin up multiple web servers in a very short time, containers can help with that. It will be interesting to see what people do.

Do you see a time when a big application such as Exchange gets refactored into a container?

Maurer: I don't know. The Exchange team probably thinks containers are very cool and they can use them for something. But containers, as of today, are mostly designed for stateless use -- stateless applications and things like that. So, you really have to have an application which is designed to run in such environments. You can spin up workers in containers but store the data outside of the container in a file share or a database or something. That would work. But containers are not going to be the solution for everything.

There are definitely some use cases for containers and use cases for virtual machines. I think it won't be one or the other. I think they will both work together. We have to see which big applications can use the container technology.

Microsoft has announced Windows containers and Hyper-V containers. How does Nano Server factor in here?

Maurer: There are two things here. One is the container runtime. That's the engine that runs the container. That can be a Windows Server container or Hyper-V container. The difference is the Hyper-V container adds some extra layers of isolation. They are a little bit more secure for some scenarios. They're also a little bit slower than Windows Server containers.

On the other hand, you have container images, which are basically the templates. If you're going to create a new container, it's always based on a container image.

With the Windows Server 2016 release, Microsoft will offer two container images. One will be a Nano Server container image and the other one will be a Server Core image. And then from there, you create a new container from either Nano Server or Server Core. You're going to install the application you want inside that container.

For example, if you want to make an IIS web server, you're going to install the IIS [Internet Information Services] role. You're going to stop that container and create a new container image from that container. So you create a new template, basically. And the next time you want to spin up a web server, you just use that container image with IIS already installed, and then you can deploy that multiple times. This IIS container image is linked to the operating system image -- Server Core or Nano Server -- and all that has changed in there is the IIS web server role.
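
A hedged Docker-style sketch of that workflow, run from PowerShell on a container host, might look like the following; the image and container names are placeholders, and the keep-alive trick on the last line mirrors early Windows container samples rather than a polished production pattern.

    # Sketch: build a reusable IIS image on top of the Server Core base image.
    # Run a container and install the Web Server (IIS) role inside it ...
    docker run --name web-build microsoft/windowsservercore `
        powershell -Command "Install-WindowsFeature Web-Server"

    # ... then capture the stopped container as a new template image.
    docker commit web-build my-iis-image
    docker rm web-build

    # Spin up web servers from the template; 'ping -t' just keeps the
    # container running because IIS runs as a background service.
    docker run -d -p 80:80 --name web01 my-iis-image ping -t localhost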

You can stack them up with different layers: add something, save it to the library again, and then deploy it and add something else.

There have been some interesting things coming out of Microsoft lately. There's been a shift and they're embracing open source, and Linux-based features are starting to come to the Windows ecosystem. How can administrators prepare for these technologies?

Maurer: Microsoft is doing a lot of things with Linux and open source. Microsoft realized that if people are building a new startup -- the next Netflix or Hulu -- they're probably going to use open source technology. They're probably going to use Apache or whatever other technology is out there.

Microsoft realized that not everything runs on Windows Server, so they started to support a lot of Linux VMs in Azure. A huge number of VMs in Azure are running Linux workloads.

Microsoft is embracing open source on different levels. They also announced that Bash is available on Windows 10. So, you can enable Bash and use Linux developer tools on a Windows machine to build new applications or to deploy stuff.

If you're a server administrator, there are also some certifications for the Linux world. What is really cool is that there is a Microsoft certification called Linux on Azure. There is an Azure exam and an official Linux Foundation exam. When you pass both exams, you're Linux on Azure certified. This is probably a good starting point.

You don't have to take the Azure exam if you're not working with Azure; you can go with the Linux exams and just get the basics. There are some resources, for example, on Microsoft's Channel 9 or other blogs out there where they talk a little bit about Linux for Windows Server administrators.

One of the articles we had on our site recently was building a home lab, and that seems to be something people are taking more of an interest in. It's like, 'OK, I don't have time to learn about these technologies at work, so I'm going to have to go home, and maybe to future-proof myself, I need to learn more about Hyper-V. I need to learn about Linux.'

Maurer: I have to do that, too. I can't go to a customer and tell them about new things I've never done before. If Microsoft releases something new, I'm going to download it and try it out.

Microsoft is really focusing on documentation right now. That's something new as well. There is some really good documentation where you have easy step-by-step explanations. You can go through and deploy and try to understand how things connect. You also see some of the limitations and some of the advantages. But you have to invest some time. There are several things to learn, and Linux is probably one of them. I also encourage users to learn PowerShell. It is going to be really important in the future.

When I do container demos, for example, or Windows Server demos, they're all based on PowerShell. It's not because I want to make the demos look fancy or something. It's just because there's no other way to do it.


Next Steps

Experts weigh in on Microsoft's moves with Windows Server 2016

What are the security changes coming in Windows Server 2016?

The challenge associated with Active Directory functional levels
