Linux, Windows Server coexist with cross-skilled management and support
Recent developments from Microsoft must have most traditional Windows systems administrators wondering how much heterogeneity they can take on before systems -- and IT personnel -- are stretched beyond their limits. Windows Docker containers are coming, Linux virtual machines run on the Azure cloud platform, and the forthcoming Nano Server has no GUI and requires PowerShell proficiency -- the client-server architecture is slowly crumbling away.
Where Linux and open source software were once considered a bane, Microsoft has since embraced technologies that make application development and cloud computing more attractive to its customers. Demands for higher efficiency have spurred Microsoft to include support for containers in Windows Server 2016, due in the second half of 2016. Containers promise higher utilization of resources and faster application development -- but what do they mean for the average Windows systems administrator? Has the management side of containers developed sufficiently to give IT the necessary tools to handle these workloads?
We asked the SearchWindowsServer advisory board members for their thoughts on what Windows Docker containers will mean to IT administrators who have to contend with this new way of virtualizing applications.
Containers represent a new and more efficient way to run applications. Virtual servers gained massive popularity because they allowed multiple workloads to run on a single server, but were terribly inefficient because each virtual machine (VM) had its own dedicated operating system (OS). Containers help to improve efficiency by allowing OS components -- and in some cases, application components -- to be shared by multiple containerized applications. This sharing occurs without compromising the isolation boundary that exists between containers.
Initially, the biggest challenge for Windows administrators will be the learning curve. Containers require administrators to think of applications, OSes and VMs in a new way. There are almost certainly going to be challenges associated with the transition to containers as a result of unfamiliarity with the technology. For example, administrators may have to rethink their approach to backups, troubleshooting problems and even workload scalability.
Once Windows shops get past the initial learning curve, the biggest challenge with containerization is going to be management. Although tools exist for managing Windows containers, it is going to take some time before the tools fully mature.
Some have predicted that containers will eventually replace server virtualization. I don't think the hypervisor is going to go away any time soon, but its role may change. Eventually hypervisors will probably be more widely used as container hosts than VM hosts.
For more from Brien Posey, please visit his contributor page.
Containers could mean many things for the average Windows administrator. It really depends on how Microsoft licenses everything.
Done right, containers would be a replacement for how applications are installed in Windows. Instead of scattering libraries and data across the OS, everything -- literally every single file and registry entry -- required by an application would be restricted to that application's container. Installing an application would be as simple as copying over the container and registering it. Uninstalling would be as simple as deregistering the container and deleting it. This is important because installing applications in Windows has traditionally made them almost inseparable from the OS. Developers don't follow Microsoft's guidelines, and Microsoft's guidelines change all the time. Communication between containerized applications is also done through traditional networking instead of the internal message bus of the OS. This means it can be firewalled, inspected and otherwise have security applied at an inter-application level.
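The install/uninstall model described above can be sketched as a toy registry, where every file and setting an application needs lives entirely inside its own container record. This is a hypothetical illustration of the idea, not the real Windows or Docker container format:

```python
# Toy model of the "everything lives in the container" install model.
# Class and field names are made up for illustration only.

class ContainerRegistry:
    def __init__(self):
        self._containers = {}  # container name -> self-contained payload

    def install(self, name, payload):
        """'Installing' is just copying the container in and registering it."""
        self._containers[name] = dict(payload)  # nothing leaks onto the host

    def uninstall(self, name):
        """'Uninstalling' is deregistering and deleting -- no orphaned files."""
        del self._containers[name]

    def files_on_host(self):
        """The host itself holds nothing belonging to any application."""
        return []

registry = ContainerRegistry()
registry.install("payroll-app", {
    "files": ["app.exe", "lib1.dll"],                 # every file the app needs
    "registry_entries": {"HKLM/.../Run": "app.exe"},  # every registry entry too
})
registry.uninstall("payroll-app")  # leaves the host exactly as it was
```

The point of the sketch is the contrast with a traditional Windows install, where files and registry entries end up interleaved with the OS and can never be fully removed.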
Lots of people think PowerShell is all you need. I suspect it's fine for them, but the overwhelming majority of Windows admins work for small businesses and prefer the GUI. Unless and until there's a much better UI available than Microsoft has proven capable of producing in the past several years, adoption will be restricted to those who love the command line and scripts.
For now, containers are a convenient means of packaging applications for organizations that have their own in-house development teams making in-house applications. They may be useful to a handful of hyperscale operations teams, but I do not see those kinds of organizations using anything from Microsoft. It doesn't make economic sense.
What is important to remember is that containers are emphatically not a replacement for virtualization. Anyone who says so doesn't understand containers. Containers are a means of packaging applications. Hypervisors are a means of packaging OSes. There is a huge difference.
When I package an OS, I am wrapping up a complete environment: all libraries, the kernel version, configuration, monitoring, network settings and so on. OSes have a lot of moving parts. Tweak any one and you can greatly affect how an application works.
Containers can have some -- but not many -- network elements to themselves. They can have some -- but not all -- libraries relevant to an application packaged. They don't have the kernel of the OS packaged and many other environmental variables also remain set at the OS level.
If you have 10,000 applications that all need the same operating environment, then by all means run 10,000 containers on the same OS. If, however, you have 10,000 applications that need eight different environments to run in, run eight different VMs and sort your applications into containers on the relevant OS.
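The sorting rule above amounts to a simple grouping: count the distinct environments your applications need, run one VM per environment, and pack each application into a container on the matching VM. Here is a toy sketch with made-up application and environment names:

```python
from collections import defaultdict

# Hypothetical inventory: application -> operating environment it needs.
apps = {
    "billing": "win2016-iis",
    "reporting": "win2016-iis",
    "etl": "win2016-dotnet35",
    "legacy-crm": "win2008r2",
}

# One VM per distinct environment; containers share that VM's OS.
placement = defaultdict(list)
for app, env in apps.items():
    placement[env].append(app)

vms_needed = len(placement)  # 3 VMs cover these 4 apps
print(f"{vms_needed} VMs host {len(apps)} containerized apps")
for env, hosted in sorted(placement.items()):
    print(f"  VM[{env}]: {sorted(hosted)}")
```

The number of VMs is driven by environment diversity, not application count, which is exactly why hypervisors stay in the picture.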
Despite Microsoft's best efforts, no data center is homogeneous. Organizations run varied configurations even when running the same version of a given OS, and almost all organizations run multiple OSes. This means that different environments are normal and will be for the foreseeable future. Hypervisors are thus not going anywhere.
But containers allow easier packaging of applications. So the sane way to wrap this all up -- from an operations standpoint, at least -- is to deploy applications into containers that run on OSes that live in VMs. Where applications can share an OS configuration, they are deployed in a container to the VM that contains that configuration. The end result is perhaps not the most efficient possible data center, but easily the most manageable compromise between theoretical perfect efficiency and real-world ease of use for administrators.
For more from Trevor Pott, please visit his contributor page.
Microsoft's embrace of containers and Linux has many traditional Windows Server administrators scratching their heads, and for good reason.
For the last three years, we've heard that containers will take over data centers and clouds, while legacy technologies, such as x86 virtualization, will become the mainframe of the 21st century. That means no matter which hypervisor you choose to build your software-defined data center on, your world is about to turn upside down. VMware and Microsoft were quick to position their hypervisors as the ideal platform for your business' containers.
In October of 2014, Microsoft partnered with Docker to bring the most talked-about container system to the Windows platform. Then, Microsoft announced Hyper-V containers and Nano Server. Azure entered the mix with the ability to deploy Docker-managed VMs from the Azure Marketplace. Suddenly, it seemed as if Microsoft was "betting the company" on containers, and cloud-native applications were the new first-class citizens of the IT world.
Meanwhile, many enterprise customers were happily plugging along, worrying about things such as the upcoming end-of-life of Windows Server 2003, developing a cloud strategy and ignoring containers completely. And for good reason: enterprise customers with significant investments in data center facilities -- both physical and virtual infrastructure -- weren't looking for faster ways to spin up an Exchange server, for example. Containers solve a problem that most businesses don't have today.
To Microsoft's credit, it's not trying to solve today's problem. It's trying to become the preferred platform for solving tomorrow's problem, and that's laudable. And the introduction of support for solutions like Windows Docker and Nano Server does not telegraph the end of the old ways. Windows administrators will be able to use Server Manager -- or even Computer Management in the familiar Microsoft Management Console metaphor -- for the foreseeable future. In fact, it's possible that with Windows Server 2016, administrators and developers can peacefully coexist: admins can gleefully review event logs and deploy patches, while developers can command ephemeral legions of tiny VMs.
Containers will eventually make it to the enterprise, due in part to Microsoft's aggressive promotion of the technology as a core feature of future Windows Server releases. But for now, at least, containers are too bleeding edge for customers who are n-1 -- or n-2 or n-3 -- on OS releases.
For more from Michael Stump, please visit his contributor page.
The concept of Docker containers usually makes Windows systems administrators nervous, especially those who have no prior experience with Linux and free, open source software. However, on further inspection, containerization bears many similarities to application virtualization.
For example, many Windows systems administrators use tools such as Microsoft App-V, Citrix XenApp, or VMware ThinApp to package and distribute individual applications along with their dependencies.
Consider a business that recently upgraded all users to Windows 10, only to find out that the core line-of-business Web application doesn't work with the Microsoft Edge browser. Whoops. Application virtualization means administrators can distribute virtualized instances of, say, Internet Explorer 8 to users, who can then keep running otherwise incompatible applications on their desktops.
Thus, with Docker containers we have the same idea: the ability to rapidly deploy isolated applications. Microsoft is taking this concept even further by including Windows Docker containers in the upcoming Windows Server 2016 OS.
Whereas Linux-based containers are used almost exclusively by application developers who need to quickly build and tear down development environments, in Windows Server 2016 we can use containers to rapidly release containerized -- that is, isolated -- Windows Server 2016 instances.
Windows Server containers are much more lightweight and agile than full Hyper-V or VMware VMs. Consider Nano Server -- the skeletal Windows Server installation option that has an installed disk footprint of little more than 650 MB -- to see where Microsoft is headed: a future where data centers host servers with the thinnest possible OS/hypervisor layers, deploying equally lean micro-VMs or containers.
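To see why the thin-OS direction matters, a rough back-of-the-envelope density comparison helps. Every figure below except the 650 MB Nano Server footprint (which comes from the text above) is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope image-density comparison.
# All numbers except nano_gb are assumed for illustration only.
host_disk_gb = 200      # assumed disk budget for guest images on one host
full_server_gb = 10.0   # assumed full Windows Server VM image size
nano_gb = 0.65          # ~650 MB Nano Server footprint (per the article)

full_vms = int(host_disk_gb // full_server_gb)
nano_instances = int(host_disk_gb // nano_gb)

print(f"Full-server VM images per host: {full_vms}")       # 20
print(f"Nano Server images per host:    {nano_instances}")  # 307
```

Disk footprint is only one dimension (memory and CPU matter just as much), but the order-of-magnitude gap is the point: the thinner the OS layer, the more instances a given host can carry.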
The Nano Server option is particularly interesting. When you stand up a physical host and install Nano Server -- including the Hyper-V Server and container packages -- you have an even thinner hypervisor layer than ESXi. In turn, this Nano Server Hyper-V host can manage thick VMs -- when business needs require that option -- and thin Windows Server containers for alternative business requirements.
In short, it seems to me that any Windows systems administrator who wants to stay relevant should already be working hard to master Windows PowerShell, Docker containerization and DevOps methodology. Technology moves fast, and it's too easy to get left behind.
For more from Tim Warner, please visit his contributor page.
Linux 101 for Windows systems administrators
Does Microsoft teaming with Docker make sense?
Officials address container security questions