Windows Server 2016 release broadens reach across IT spectrum
A comprehensive collection of articles, videos and more, hand-picked by our editors
As the world of IT evolves from the traditional client-server model, so too does the next iteration of Windows Server. Due out sometime in the second half of this year, Windows Server 2016 will come equipped to reflect these changing times. Some Windows Server 2016 features include support for containers, tighter integration with Microsoft's Azure cloud service and stronger security for Hyper-V virtual machines.
While a move to the cloud and microservices may not be on the immediate roadmap for most organizations, Microsoft is laying the groundwork for what it considers to be an eventuality for companies that want to reduce capital expenditures by using Microsoft's infrastructure to handle their workloads.
John Joyner works for managed services provider ClearPointe as the company's director of product development. Because ClearPointe's monitoring stack is based on Microsoft Azure and System Center, Joyner has been working with the technical previews of Windows Server 2016 to see what the future holds for Windows Server administrators. In this podcast, Joyner talks about some of the more intriguing Windows Server 2016 features.
(This transcript has been edited for clarity and brevity.)
SearchWindowsServer: What are the top innovations that you've seen in Windows Server 2016?
Joyner: I'm going to lead with Storage Spaces Direct, [which] is the latest evolution of Microsoft's software-defined storage technology. Microsoft introduced Storage Spaces with Windows Server 2012, along with the ability to transcend the need for a SAN -- a storage area network -- and dedicated storage networking hardware. Basically, Storage Spaces moved the SAN controller function into Windows software and allowed you to use inexpensive disk enclosures populated by what we call JBOD -- just a bunch of disks -- that are not in a sophisticated fault-tolerant array, but are instead presented to the operating system as individual disks. The array-type functions -- the [disk] striping and the fault tolerance -- are now done by Windows rather than by an external appliance.
Microsoft identified storage, SANs and the associated SAN infrastructure as the most costly single component in data center [capital expenditure]. Microsoft research [says] approximately 60% of a new data center's cost is directly attributable to storage and storage area networking. Microsoft released Storage Spaces as a way out of this cost issue by eliminating the need for the separate SAN infrastructure. Storage Spaces leverages less expensive passive disk enclosures and less expensive JBOD disks running on commodity hardware rather than pricier proprietary hardware, thereby giving you high performance at significantly reduced cost. Through converged networking -- running your storage and your data over the same 10 [Gigabit Ethernet] or 40 [Gigabit Ethernet] pipe -- we eliminate the need for separate Fibre Channel or iSCSI networks.
The difference between Storage Spaces in Windows Server 2012 and Storage Spaces Direct in Server 2016 is that in 2012, each enclosure still required its own redundant serial-attached SCSI connections to each scale-out file server node. There was a bit of a scaling limitation because you could only fit so many dual-channel SAS [host bus adapters] in a server, which limited how many enclosures you could connect to. Microsoft has eliminated that bottleneck, or scaling constraint, by no longer requiring a direct SAS connection.
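As a rough illustration of how this looks operationally -- a hedged sketch, since cmdlet parameters varied across the technical previews, and the cluster, node and volume names here are hypothetical -- Storage Spaces Direct is enabled on an existing failover cluster from PowerShell:

```powershell
# Hypothetical cluster built from four commodity nodes with local disks.
# Enable-ClusterStorageSpacesDirect claims the eligible local disks on
# each node and pools them cluster-wide -- no SAS cabling to a shared
# enclosure is required.
New-Cluster -Name "S2D-Cluster" -Node "Node1","Node2","Node3","Node4" -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "S2D-Cluster"

# Carve a resilient volume out of the pooled capacity.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMStore" `
    -FileSystem CSVFS_ReFS -Size 1TB
```

The striping and fault tolerance Joyner describes happen inside the pool that `Enable-ClusterStorageSpacesDirect` creates, in software, rather than in an external array controller.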
Are four hosts required to use Storage Spaces Direct?
I don't have that specific information about four hosts. I haven't heard that specifically. It may be necessary to deploy a certain number of hosts to have the highest level of redundancy possible. For example, you can deploy Storage Spaces with the enclosures. You have to have three enclosures to have enclosure awareness; two enclosures aren't enough of a vote to make a decision. You have to have three, and so that's a limitation. You can run Storage Spaces on one or two enclosures. But to have the ability to lose an entire enclosure and keep running, you need three. In terms of how many nodes and enclosures you need, there probably is an optimum spot that provides the highest level of redundancy. But there are probably ways to deploy it with fewer components if you can accept slightly reduced redundancy.
The next strategic thing is what's called the shielded VM and the guarded fabric feature. This is a solution Microsoft has come up with -- a very high-security virtualization solution for the hosting and financial services communities. There's a product from Microsoft competitor VMware, part of its vSphere Suite, that's often called the host-based firewall. VMware wraps a protective layer around each virtual machine to prevent a hostile virtual machine from overtaking the host. This is a common feature implemented in PCI, the payment card industry. The credit card, financial and banking industries have frequently implemented this solution from VMware as providing the highest level of workload isolation in a data center for extremely sensitive financial transactions.
Microsoft has developed an alternative technology [that assumes] the worst case: that the host is already compromised. How can we protect the guests from a hostile host? We can't protect the guest from being shut down, right? We can't necessarily prevent denial of service and mischief. But we can prevent data loss and data theft. The shielded VM solution leverages Microsoft's proven BitLocker technology with the Trusted Platform Module (TPM) in the Hyper-V host, and the guarded fabric solution, which virtualizes those TPMs. When you create a shielded VM, the VM is encrypted at rest and in production with the virtual TPM assigned to that virtual machine.
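On the Hyper-V side, the building block is a virtual TPM attached to the VM and a key protector describing who may run it. A minimal sketch, assuming a Windows Server 2016 Hyper-V host with the HgsClient cmdlets available; the VM and guardian names are hypothetical, and a real deployment would enroll guardians from the Host Guardian Service rather than use a local untrusted root:

```powershell
# Create a local guardian and a key protector for the VM's virtual TPM.
# -AllowUntrustedRoot is for lab use only; a production guarded fabric
# gets its guardians from the Host Guardian Service.
$guardian = New-HgsGuardian -Name "LabOwner" -GenerateCertificates
$kp = New-HgsKeyProtector -Owner $guardian -AllowUntrustedRoot

# Bind the key protector to the VM and enable its virtual TPM, which
# lets BitLocker run inside the guest and encrypt the VM's disks.
Set-VMKeyProtector -VMName "FinanceVM" -KeyProtector $kp.RawData
Enable-VMTPM -VMName "FinanceVM"
```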
This allows the highly security-conscious [hosting company] or enterprise customer to deploy an absolutely forward-looking, leading-edge, high-security solution at far less cost than using competing technology. To use the VMware solution … you have to use VMware, and you have to use a storage technology supported by VMware, such as Fibre Channel or iSCSI. With the Microsoft solution, we're using Storage Spaces. We don't need the expensive SAN architecture. And we also don't need the third-party product.
Does encrypting the virtual machine have any impact on performance?
It's a small impact. Reading an encrypted file, like reading a compressed file, includes some overhead. I don't have exact numbers, but it's usually in the 5% to 15% category -- not significant for planning purposes, and not a significant slowdown at all.
What were some of the other Windows Server 2016 features worth noting?
The adoption of containers and of new technologies that depend on containers. Containers come from the open source world, particularly the product known as Docker. In a Docker configuration, you author a configuration document that describes your service or application in terms of its resource requirements: so much storage, so much networking, so much compute. This definition of your service is then deployed to a container, and the appropriate resources -- compute, storage and networking -- are assembled and delivered just in time to create that resource. This is a radically different way of deploying services: we're not standing up a server and then deploying an app in that server. We're deploying an entire service all at once. The networking, the firewall, the security, the apps, the database -- all deployed as a single configuration.
When we write our application to a Docker or container-type specification, we can then move the entire application -- all of its parts -- to any other Docker-compliant cloud. The concept is that you can write an application once, and then move it to another private cloud, a hosted cloud or a partner cloud, whatever makes the most sense for you economically and feature-wise in your market and industry. Using containers evolves us away from the server-based way of deploying services to a more holistic method.
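The configuration document Joyner describes corresponds to a Dockerfile. A minimal, hypothetical sketch of the idea for a Windows-based web service -- the base image, file paths and port are illustrative examples, not details from the interview:

```dockerfile
# Hypothetical Windows container image definition: the whole service
# -- web server role, content and exposed port -- travels as one unit
# to any Docker-compliant host.
FROM microsoft/iis
COPY ./site/ /inetpub/wwwroot/
EXPOSE 80
```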
There are two specific sub-technologies that complement containers. One is called Nano Server. Nano Server is Microsoft's concept to provide on-demand compute that is highly elastic and very inexpensive to manage. Nano Servers are highly stripped-down compute kernels of Windows Server. Think of them as Windows Server Core, but taken to the next level of stripped-down function, because a Nano Server is not made to be treated like a pet and given a lot of love and attention over a lifecycle of many years. A Nano Server is meant to be treated more like cattle: herded and handled by quantity, with no special attachment to any one particular instance of Nano Server. They are ephemeral compute resources.
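In the technical previews, Nano Server images are built offline with the NanoServerImageGenerator PowerShell module rather than installed interactively. A sketch with hypothetical paths and names -- parameters shifted between preview builds, so treat this as illustrative:

```powershell
# Build a Nano Server VHD from installation media (hypothetical paths).
# -Compute adds the Hyper-V role; -GuestDrivers targets running the
# image as a virtual machine.
Import-Module C:\NanoServer\NanoServerImageGenerator.psm1
New-NanoServerImage -MediaPath D:\ -BasePath C:\NanoBase `
    -TargetPath C:\Nano\Nano1.vhd -ComputerName "Nano1" `
    -Compute -GuestDrivers
```

Because instances are built from a definition like this rather than hand-configured, any one Nano Server can be discarded and rebuilt -- the "cattle, not pets" model.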
A third and final thing I'll throw out there is DSC, Desired State Configuration, which is another PowerShell-derived technology from Jeffrey Snover, the lead architect for Windows Server. Jeffrey's vision is that the Desired State Configuration service, which is a web service, enables automatic configuration of servers that -- either through push or pull technology -- are aware of the DSC server. So you would define your application through DSC, and then when you deployed Nano Servers to fulfill that application, those Nano Servers would wake up, pull their configuration details from DSC and say, 'Oh, I'm going to be a web server. Oh, I'm going to be a file server; here are my file shares.' All of this is contained in the DSC document -- the Desired State Configuration reference. Then we just point that DSC definition to a container.
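A DSC document is authored in PowerShell and compiled into a MOF file that each node receives by push or pulls from the DSC service. A minimal sketch of the "I'm going to be a file server" case -- the node name, feature and paths are hypothetical:

```powershell
# Hypothetical DSC document: node "FS01" declares itself a file server
# with a data directory; DSC enforces this state, it is not a script
# of steps.
Configuration FileServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "FS01" {
        WindowsFeature FileServer {
            Name   = "FS-FileServer"
            Ensure = "Present"
        }
        File DataFolder {
            DestinationPath = "C:\Shares\Data"
            Type            = "Directory"
            Ensure          = "Present"
        }
    }
}
# Invoking the configuration compiles .\FileServerConfig\FS01.mof,
# which is what gets pushed to (or pulled by) the node.
FileServerConfig
```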
Through this technology, Microsoft is addressing another major cost of the data center, and that's design and deployment. We're significantly reducing the storage cost by going to Storage Spaces Direct. We're also significantly reducing or even eliminating the design and deployment costs, because when we have an application that is authored to run in the container and uses Nano Servers and Desired State Configuration, there's no mental challenge to performing the deployment. All of the mental work is done in creating that definition.
What are the differences between Windows containers and Hyper-V containers?
There's not much difference. The standard is Docker -- an open source standard that defines how to deploy services to containers. Microsoft's goal is to make things not proprietary, to make them open and to empower the customer to decide where they want to run their workloads, whether that's most effectively and cost-efficiently on premises, in a partner cloud or in the Microsoft cloud. Microsoft wants to empower you to do that and not be constrained technically, so it made them all run technically the same. You can decide where to host a workload based on other business factors like cost, proximity, etc.
How different is management in Windows Server 2016?
I think the most marked change or development for administrators from a management point of view is that a lot of stuff is now moving to Azure. Azure is becoming the destination for the new services. System Center components may be enabled, they may be adapted, or they may be superseded by the cloud features. But more and more activities and services require investment in, familiarity with and use of Azure tools. This is a good thing, because Microsoft is not throwing things willy-nilly into the cloud. They are onboarding services based on what they call UserVoice, which is their actual metric-based feedback from customers around the world on what enterprises need first, in priority order.
As those are addressed, Azure is releasing the answers to those business [requests]. Every few months, a significant new service [or] management service becomes available in Azure. An example would be Azure Automation, which is the ability to take remediation steps, based on scripting, from an Azure alert. Using the Microsoft [Operations Management Suite] product, we can monitor the performance of a server just as we can with Operations Manager.
In Operations Manager, if we want to perform a task based on an alert -- for example, an alert says a router is malfunctioning -- we may have an automatic task to restart that router before we escalate any further tickets. In Operations Manager, that sort of remediation can be built into a management pack, or it can be enabled as a task in the System Center Operations Manager console. In the new Azure-based world, there are now hooks that allow an Azure alert to kick off an Azure Automation runbook for remediation. So instead of being confined to your on-premises tools, you now have a globally available, high-performing tool that you can write your automation to that isn't dependent on a particular server or business location. From the administrator point of view, more and more is rolling up to the cloud, and that's probably the biggest news on the management side.
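The remediation hook described above is a runbook that an Azure alert invokes. A hedged sketch of what such a runbook might look like -- the parameter names, target service and wiring to the alert are all hypothetical and configured on the Azure side:

```powershell
# Hypothetical Azure Automation runbook: restart a failed service on
# the server named by the alert before any ticket is escalated.
param(
    [Parameter(Mandatory = $true)][string]$ServerName,
    [string]$ServiceName = "Spooler"
)

Invoke-Command -ComputerName $ServerName -ScriptBlock {
    param($svc)
    Restart-Service -Name $svc -ErrorAction Stop
    Write-Output "Restarted $svc on $env:COMPUTERNAME"
} -ArgumentList $ServiceName
```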