How containers work
A container is a logical environment created on a computer where an application can run. The container and its guest application are abstracted from the underlying host computer's hardware resources -- processors, memory, storage and so on -- and are logically isolated from other containers.
Containers bear similarities to virtual machines (VMs), but unlike VMs, containers do not rely on a separate hypervisor layer. Instead, containers are supported by the underlying operating system -- traditionally Linux, though non-Linux operating systems, such as Windows Server 2016 and macOS, increasingly support containers as well.
Containers share the same OS kernel, which eliminates the need to install a separate OS instance for each container. However, security or functional problems with the OS can potentially affect all the containers that run on the OS.
The container host provides each container with a virtualized namespace that grants access only to the resources the container should see. This restricted access makes the container behave as if it were the only application running on the system. The host also controls how much of its resources each container can consume. For example, the container host can cap a container's CPU usage at a percentage that the application cannot exceed.
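With Docker, these caps can be set at run time through flags on the run command. A minimal sketch, in which the image name is illustrative:

```powershell
# Start a container capped at half of one CPU and 512 MB of memory.
# "myapp:latest" is a placeholder image name.
docker run -d --cpus="0.5" --memory="512m" myapp:latest
```

The host enforces these limits regardless of what the application inside the container tries to request.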
A container image is a file that represents an amalgam of the application, its data, operating system libraries and all other dependencies needed to run the application. The container image can be stored locally or in a public or private container registry.
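As a sketch, a Windows container image can be defined in a Dockerfile that layers the application and its dependencies on a base OS image; the base image tag and file names here are illustrative:

```dockerfile
# Base layer: Windows Server Core OS libraries (illustrative tag)
FROM microsoft/windowsservercore

# Copy the application and its data into the image
COPY app/ C:/app/

# Command to run when a container starts from this image
CMD ["C:\\app\\myservice.exe"]
```

Building this file produces a single image that bundles everything in the list above.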
When the container image is deployed to a host system with a compatible container platform -- such as Docker -- the application will run without the need to install or update any other components on the host. This compact, self-sufficient behavior enables containers to be extremely portable. Developers can create and test applications, and then deploy the applications to production, without the need for developers or IT operations staff to make any changes to the image.
Containers can also be interconnected to create larger, more scalable applications using a microservices architecture. For example, a traditional monolithic application may be rearchitected into functions, where each function is deployed in a different container and enabled to communicate across containers using APIs.
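For instance, two containerized functions might be joined on a shared Docker network so each can call the other's API by container name; the image, network and container names here are hypothetical:

```powershell
# Create a network the containers will share
docker network create app-net

# Run each function in its own container on that network
docker run -d --network app-net --name orders  orders-svc:1.0
docker run -d --network app-net --name billing billing-svc:1.0

# Inside app-net, the billing container can reach the orders API
# by host name, e.g. http://orders/
```

Each service can then be scaled, updated or replaced independently of the others.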
Windows container types
Windows Server 2016 supports two different container types: Windows Server containers and Hyper-V containers.
Windows Server containers rely on the Windows Server kernel. The kernel uses process and namespace isolation to create a distinct space for each container, and all containers on the host share that kernel. Because this shared kernel does not provide a strong security boundary, it is best to avoid running untested or untrusted containers in this mode.
In Hyper-V containers, Hyper-V isolation runs each container inside a lightweight VM with its own copy of the Windows kernel, so the container shares a kernel with neither the host nor other containers. Because the container brings its own kernel, its OS version does not have to match the one used on the host. This approach provides stronger integrity by enforcing hardware-backed isolation between the container and the other VMs, containers and the rest of the system. Organizations can use Hyper-V isolation in multi-tenant environments or in scenarios where containers are untested or untrusted.
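With Docker on Windows, the isolation mode is selected at run time, so the same image can run either way; the image name here is illustrative:

```powershell
# Run with the default process (Windows Server container) isolation
docker run -d myapp:latest

# Run the same image with Hyper-V isolation instead
docker run -d --isolation=hyperv myapp:latest
```

The trade-off is overhead: the Hyper-V-isolated container starts a small utility VM, which costs more memory and startup time than process isolation.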
Windows containers can support established Windows technologies, including .NET and ASP.NET. Admins can create and manage containers with PowerShell or Docker, but containers created with one tool can't be managed with the other. Organizations can also create and manage Windows containers in the Microsoft Azure public cloud platform.
Windows container requirements
Windows containers require at least one local system that runs Windows Server 2016. Organizations can also deploy containers on Microsoft Azure to enable migrations between local data centers and Azure cloud resources. However, organizations do not need both environments at once; either can host containers on its own.
Next, administrators need to install the Windows Server 2016 version of Docker -- both Docker Engine and the Docker client. Installation typically starts in PowerShell: the DockerMsftProvider module is installed from the PowerShell Gallery through the OneGet PackageManagement module, and that provider is then used to install the latest Docker package and enable container functionality on the Windows Server system. Once Docker is installed, the system must reboot before containers can run.
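Under those assumptions, the installation sequence can be sketched in PowerShell as:

```powershell
# Install the Docker provider module from the PowerShell Gallery
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

# Use the provider to install the latest Docker package,
# which also enables the Containers feature
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

# Reboot so the Containers feature and the Docker service start cleanly
Restart-Computer -Force
```

After the reboot, `docker version` should report both the client and the engine.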
Next, install any updates for Windows Server 2016. Administrators new to containers might want to use a host server in a lab environment so updates don't disrupt the production environment. Finally, administrators can create or download a container image and then deploy that container image with the docker run command.
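A minimal deployment might look like the following; the registry and image names are placeholders:

```powershell
# Download a container image from a registry (names are illustrative)
docker pull myregistry/myapp:latest

# Deploy the image as a running container
docker run -d --name myapp-instance myregistry/myapp:latest

# Verify the container is running
docker ps
```

From here, the same image file can move unchanged from the lab host to production systems.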
Windows container deployment
Administrators can deploy a single container, but a problem with that lone container can make the application unavailable. Organizations can improve the availability of containerized applications by deploying containers in clusters through Docker Swarm. If one instance of a container fails, other instances of that container on other systems continue to operate.
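As a sketch, Swarm-based replication might look like this; the service and image names are hypothetical:

```powershell
# Initialize a swarm on the first host (other hosts join as nodes)
docker swarm init

# Run the application as a service with three replicas; if one
# replica fails, the swarm reschedules it on another node
docker service create --name myapp --replicas 3 myapp:latest

# Check how many replicas are running
docker service ls
```

The swarm continuously reconciles the running state against the declared replica count, which is what provides the resilience described above.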