The trumpeted arrival of containerized workloads at production’s gates has enterprise IT shops bracing for the new landscape of container OSes.
Containers alleviate one headache for systems administrators. Virtual machines that encapsulate an entire OS force IT shops to run many different OS versions, which breeds dependency sprawl and configuration drift. With containers, one OS underpins all of the containers on a given host, which simplifies OS updates and reduces per-server resource overhead.
Sys admins should scrutinize the size of enterprise OSes as they adopt containers, advised Thomas Bittman, vice president at Gartner, who spoke at the analyst firm’s IT Operations Strategies and Solutions conference in Orlando in May. Containerization could force OS administrators to support more and newer OSes in production, at least for now.

Windows Server typically requires 10 gigabytes to operate, while Red Hat Enterprise Linux needs nearly half that, at about 6 GB. But that’s still a heavy OS originally designed for monolithic applications, not lightweight containers that encapsulate small functions of an application.

As containerized workloads enter the production environment alongside legacy apps, IT admins will be pressured to adopt micro OSes such as Alpine Linux, CoreOS Linux, RancherOS and Red Hat’s Atomic Host, Bittman said. For Windows administrators, Server Core is a stripped-down edition of Windows Server at 5 GB, but that’s still big compared with micro and targeted container OSes; RancherOS, by contrast, occupies 150 megabytes. Nano Server, available in Windows Server 2016 and Microsoft Azure cloud deployments, is a container OS that, at around 500 MB, finally makes Microsoft a worthy contender, but organizations have been slow to embrace it.
“Windows administrators are used to a UI [user interface] on their servers … they’re not familiar with PowerShell and the command line,” said Thomas Maurer, a Microsoft MVP for Hyper-V and a cloud architect for Swiss-based itnetX. Admins worry that if something goes wrong in this smaller OS they won’t be able to fix it. Microsoft also forces a rapid pace of change on Nano Server via the Current Branch for Business cadence for Software Assurance — fall more than two versions behind and you’ll lose support.
Microsoft has hit some bumps as it integrates container operations into Windows Server. It abandoned a homegrown container management engine project in favor of the Docker Engine and Docker client wrapped with a set of PowerShell cmdlets. This PowerShell module is still in its infancy, and it is unclear how extensively Microsoft will develop it, Maurer said.
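In practice, that means Windows admins work with the same Docker CLI used on Linux hosts, with the PowerShell module layering cmdlets over the same engine. A minimal sketch, assuming a Windows Server 2016 host with the Docker Engine installed; the `microsoft/nanoserver` base image name reflects how Microsoft published it at the time and is used here illustratively:

```shell
# Standard Docker client commands against the Docker Engine on Windows Server;
# the PowerShell module wraps these same engine operations in cmdlets.
docker pull microsoft/nanoserver          # fetch the Nano Server base image
docker images                             # list locally cached images
docker run -it microsoft/nanoserver cmd   # start an interactive container
```

The commands require a Windows container host with the Docker Engine running, so they are shown here as a workflow sketch rather than a portable script.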
Microsoft loves containers because they take the focus off the hypervisor and put it back on the operating system, Bittman said. But with Nano Server released only late last year, he wondered why it took the company so long to invest in container OS technology.
Microsoft was late to the game, Maurer agreed, and needed time to build a base that enables containers, including the .NET Core framework and the Nano Server minimal deployment option, which is a significant reduction in size from Server Core.
The company is making strides now. Hyper-V containers provide an additional layer of isolation to essentially give each container its own kernel, which Maurer describes as a creation somewhere between a VM and a container. By the end of 2017, Microsoft will bring the Windows Subsystem for Linux onto Windows Server, using this Hyper-V container isolation to run Linux and Windows containers on a Windows Server host. Host-independent containerization should be ready as early as this summer through Microsoft’s Insider early access program.
“That’s really cool — that’s where things are going,” Maurer said. “If you have a mixed environment, instead of having Linux everywhere you just install Windows and you can run those Linux containers on Windows and you don’t have to think about it anymore. … It wouldn’t matter on which host you deployed a Linux container or a Windows container, you can run it on all of the hosts.”
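The isolation level Maurer describes is chosen per container at run time. A sketch, assuming a Windows Server host with Docker installed; `--isolation` is the Docker flag that selects the mode, and the image name is illustrative:

```shell
# Hyper-V isolation runs the container in a lightweight Hyper-V partition,
# giving it its own kernel rather than sharing the host's:
docker run --isolation=hyperv microsoft/nanoserver cmd /c echo hello

# The default on Windows Server is process isolation, where containers
# share the host kernel as on Linux:
docker run --isolation=process microsoft/nanoserver cmd /c echo hello
```

Because the kernel is no longer shared in Hyper-V mode, it is this extra boundary that lets a Linux container run on a Windows host.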
For those just starting with containers, the challenge isn’t the technology itself; it’s a change in operations, Maurer said. A sys admin can create and maintain a VM for months or years, but containers are disposable: you declare the configuration, let containers be destroyed and restarted, and redeploy container images with each update.
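That declarative update loop can be sketched in a few commands; a minimal example, assuming a hypothetical app image named `myapp` built from a Dockerfile in the current directory (all names are illustrative):

```shell
# Each update rebuilds the image from its declared configuration (the
# Dockerfile) and redeploys, rather than patching a running container.
docker build -t myapp:2.0 .            # bake the new version into an image
docker stop myapp && docker rm myapp   # throw the old container away
docker run -d --name myapp myapp:2.0   # start fresh from the new image
```

The VM habit of logging in to patch a long-lived machine has no equivalent here; the image is the unit of change.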
Finally, a word to the wise: While containers seemingly replace one OS challenge with another, what they really do is add a layer. Most container workloads that enter production for the foreseeable future will do so on host VMs, which have established management and security processes for production workloads.
Meredith Courtemanche is a senior site editor in TechTarget’s Data Center and Virtualization group, with sites including SearchITOperations, SearchWindowsServer and SearchExchange. Find her work @DataCenterTT or email her at email@example.com.
Author’s note: If you take issue with the exclusive focus on people named Tom interviewed for this blog, please contact SearchWindowsServer editor Tom Walat.