Tag Archives: Containers

Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To provide a simple demonstration of the workflow, we have a small client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand that speak over physical COM, so we ran the application inside a VM and assigned the VM's virtual COM port to the container. To mimic a COM device, we created an application that generates random integer values and sends them over a named pipe to the VM's virtual COM port (the PowerShell console on the left).

At the beginning of the video, we run the container without assigning the COM port, so when the application tries to open a handle to the port it fails with an IOException (as far as the container is concerned, the COM port doesn't exist). On the second run, we assign the COM port to the container, and the application successfully receives and prints the random integers generated by the app running on the host.
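The demo listener itself isn't published with this post, but a rough PowerShell sketch of the same idea might look like this (COM1, the 9600 baud rate, and the one-integer-per-line framing are all assumptions):

# Hypothetical sketch of a COM-port listener like the demo app
$port = New-Object System.IO.Ports.SerialPort 'COM1', 9600
try {
    $port.Open()                   # throws an IOException when the port isn't visible to the container
    while ($true) {
        $value = $port.ReadLine()  # sender writes one integer per line (assumption)
        Write-Host "Received: $value"
    }
}
finally {
    if ($port.IsOpen) { $port.Close() }
}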

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we're passing to the device argument is simple: it consists of an IdType and an Id, delimited by a slash, "/". For this coming release of Windows, we only support an IdType of "class", and the Id is a device interface class GUID. Whereas in Linux a user assigns individual devices by specifying a file path in the "/dev/" namespace, in Windows we're adding support for a user to specify an interface class, and all devices that identify as implementing this class will be plumbed into the container.

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes. We're starting with simple buses. The complete list of device interface classes that we currently support is below.

Device Type    Interface Class GUID
GPIO           916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus        A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port       86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus        DCDE6AF9-6610-4285-828F-CAAF78C424CC

Stay tuned for Part 2 of this blog, which explores the architectural decisions we made in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.

Cheers,

Craig Wilhite (@CraigWilhite)

Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but there are specific management strategies that work for both that admins should incorporate.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.


One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins that use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
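For illustration, the isolation mode is chosen when the container is started; the same image can run either way (the image name below is just an example, and process isolation is the default on Windows Server 2016):

# Windows Server container: shares the host kernel (process isolation)
docker run --isolation=process microsoft/windowsservercore cmd /c ver
# Hyper-V container: the same image, wrapped in a utility VM for kernel-level isolation
docker run --isolation=hyperv microsoft/windowsservercore cmd /c ver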

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.
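In practice, that split looks something like the following from an elevated PowerShell prompt on the container host (a sketch assuming the Hyper-V PowerShell module and Docker are installed):

# VMs: Hyper-V PowerShell module (the same data Hyper-V Manager and SCVMM surface)
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned
# Containers: the Docker CLI, since Hyper-V Manager and SCVMM have no container view
docker ps --all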

Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with containers in a lab environment before attempting to deploy them in production. Although containers are based on Hyper-V, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.


When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.
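As a sketch of what swapping out the base image means in practice, the workflow is roughly the following (the image names, registry and deployment are placeholders):

# Pull the patched base image and rebuild the application image on top of it
docker pull microsoft/windowsservercore:latest
docker build -t myregistry.azurecr.io/myapp:patched .
docker push myregistry.azurecr.io/myapp:patched
# Roll the new image out to a Kubernetes deployment
kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:patched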

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps style work, and provide them isolation and security from each other.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much as a PaaS provider as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.
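For example, adding nodes or moving to a newer Kubernetes release is a single CLI call against the cluster created later in this article (the version number shown is only an example):

# Scale the node pool from two nodes to three
az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3
# See which Kubernetes versions the cluster can move to, then upgrade
az aks get-upgrades --resource-group AKSCluster --name AKSCluster1 --output table
az aks upgrade --resource-group AKSCluster --name AKSCluster1 --kubernetes-version 1.11.3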

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

[Figure: Starting a PowerShell session in Azure Cloud Shell]

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService


How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.
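A minimal sketch of such a manifest is shown below (the names and image reference are placeholders); it is loaded into the cluster with kubectl apply -f <file>:

# Deployment: two replicas of the web container (image reference is a placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myregistry.azurecr.io/myapp:1.0
        ports:
        - containerPort: 80
---
# Service: exposes the deployment through an Azure load balancer on port 80
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80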

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.

OpenShift on OpenStack aims to ease VM, container management

Virtualization admins increasingly use containers to secure applications, but managing both VMs and containers in the same infrastructure presents some challenges.

IT can spin containers up and down more quickly than VMs, and they require less overhead, so there are several practical use cases for the technology. Security can be a concern, however, because all containers share the same underlying OS. As such, mission-critical applications are still better suited to VMs.

Using both containers and VMs can be helpful, because they each have their place. Still, adding containers to a traditional virtual infrastructure adds another layer of complexity and management for admins to contend with. The free and open source OpenStack provides infrastructure as a service and VM management, and organizations can run Red Hat’s OpenShift on OpenStack — and other systems — for platform as a service and container management.

Here, Brian Gracely, director of OpenShift product strategy at Red Hat, based in Raleigh, N.C., explains how to manage VMs and containers, and he shares how OpenShift on OpenStack can help.

What are the top challenges of managing both VMs and containers in virtual environments?

Brian Gracely: The first one is really around people and existing processes. You have infrastructure teams who, over the years, have become very good at managing VMs and … replicating servers with VMs, and they’ve built a set of operational things around that. When we start having the operations team deal with containers, a couple of things are different. Not all of them are as fluent in Linux as you might expect; containers are [based on] the OS. A lot of the virtualization people, especially in the VMware world, came from a Windows background. So, they have to learn a certain amount about what to do with the OS and how to deal with Linux constructs and commands.

Container environments tend to be more closely tied to people who are doing application development. Application developers are … making changes to the application more frequently and scaling them up and down. The concept of the environment changing more frequently is sort of new for VM admins.

What is the role of OpenStack in modern data centers where VMs and containers coexist?

Gracely: OpenStack can become either an augmentation of what admins used to do with VMware or a replacement for VMware that gives them all of the VM capabilities they want to have in terms of networking, storage and so forth. In most of those cases, they want to also have hybrid capabilities, across public and private. And they can use OpenShift on OpenStack as that abstraction layer that allows them to run containerized applications and/or VM applications in their own data center.

Then, they’ll run OpenShift in one of the public clouds — Amazon or Azure or Google — and the applications that run in the cloud will end up being containerized on OpenShift. It gives them consistency from what the operations look like, and then there’s a pretty simple way of determining which applications can also run in the public cloud, if necessary.

What OpenShift features are most important to container management?

Gracely: OpenShift is based on Kubernetes technology — the de facto standard for managing containers.

If you’re a virtualization person … it’s essentially like vCenter for containers. It centrally manages policies, it centrally manages deployments of containers, [and] it makes sure that you use your compute resources really efficiently. If a container dies, an application dies, it’s going to be constantly monitoring that and will restart it automatically. Kubernetes at the core of OpenShift is the thing that allows people to manage containers at scale, as opposed to managing them one by one.

What can virtualization admins do to improve their container management skills?

Gracely: Become Linux-literate, Linux-skilled. There are plenty of courses out there that allow you to get familiar with Linux. Container technology, fundamentally, is Linux technology, so that’s a fundamental thing. There are tools like Katacoda, which is an online training system; you just go in through your browser. It gives you a Kubernetes environment to play around with, and there’s also an OpenShift set of trainings and tools that are on there.

How can admins streamline management practices between other systems for VMs and OpenShift for containers?

Gracely: OpenShift runs natively on top of both VMware and OpenStack, so for customers that just want to stay focused on VMs, their world can look pretty much the way it does today. They’re going to provision however many VMs they need, and then give self-service access to the OpenShift platform and allow their developers to place containers on there as necessary. The infrastructure team can simply make sure that it’s highly available, that it’s patched, and if more capacity is necessary, add VMs.

Where we see … things get more efficient is people who don’t want to have silos anymore between the ops team and the development team. They’re either going down a DevOps path or combining them together; they want to merge processes. This is where we see them doing much more around automating environments. So, instead of just statically [building] a bunch of VMs and leaving them alone, they’re using tools like Ansible to provision not only the VMs, but the applications that go on top of those VMs and the local database.

Will VMs and containers continue to coexist, or will containers overtake VMs in the data center?

Gracely: More and more so, we’re seeing customers taking a container-first approach with new applications. But … there’s always going to be a need for good VM management, being able to deliver high performance, high I/O stand-alone applications in VMs. We very much expect to see a lot of applications stay in VMs, especially ones that people don’t expect to need any sort of hybrid cloud environment for, some large databases for I/O reasons, or [applications that], for whatever reason, people don’t want to put in containers. Then, our job is to make sure that, as containers come in, that we can make that one seamless infrastructure.

Windows Server containers and Hyper-V containers explained

A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers.

This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.

Windows Server containers vs. Hyper-V containers

Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don’t each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.

Hyper-V containers and their dependencies reside in Hyper-V VMs and provide an additional layer of isolation. For reference, Hyper-V containers and Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don't store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.

The role of Docker on Windows Server

In order to package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, supported by Windows Server, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you’ll need to configure it for Windows, a process that includes selecting secured connections and setting disk paths.
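For reference, the usual installation path on Windows Server 2016 goes through the Microsoft package provider; a sketch, run from an elevated PowerShell session:

# Install the Docker provider and the Docker package, then reboot to finish enabling the Containers feature
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force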

One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they’re stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.

Meet Hyper-V container requirements

If you prefer to use Hyper-V containers, make sure you have Server Core or Windows Server 2016 installed, along with the Hyper-V role. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn't yet support AMD processors.

Although these requirements might not seem extensive, it’s important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.

OS components affect both container types

Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. Unfortunately, there are different elements that can negatively affect this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don’t match up with the OS kernel running on the host, the container will most likely be blocked.

The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn’t match, but they might not work properly.
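One quick way to compare the two is shown below (a sketch; the image name is whatever Windows base image you have pulled):

# Windows version of the container host (major.minor.build.revision)
[System.Environment]::OSVersion.Version
# Windows version the container image was built against
docker inspect --format "{{.OsVersion}}" microsoft/windowsservercore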

Antimalware tools and container performance

Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.

Antimalware tools aren't aware of the redirection and don't know which components are placeholders and which components are modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity: an extra create parameter (ECP) is attached to the create CallbackData, and the tool can check the ECP's redirection flags. The ECP will then indicate either that the file was opened from a remote layer or that the file was opened from a local layer.

Microsoft empowers Windows containers to erase OS divide

Linux and Windows used to be like oil and water, but new approaches to Windows containers will find a way to blend them.

Containers are increasingly important to achieve DevOps velocity. Containerization pulls provisioning and configuration management data into the container image so apps don’t look to the host OS for that information. As intelligence gets baked into containers, the host OS becomes less important to manage the environment.

Windows containers must still catch up to the state of the art in Linux, where containers originated. But as the technology develops, traditionally separate Linux and Windows operations will begin to meld behind container orchestration software.

“Whether you’re on a Linux or Windows environment, you don’t have to worry as much about how you’re building the server,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. “You can use the same host image across [OS] platforms because you don’t have to worry about individual configuration needs for apps.”

Microsoft, which saw this trend coming, has jettisoned its Windows-only focus. Recent versions of products such as Azure Container Instances (ACI), ASP.NET Core 2016, Azure Web App for Containers, and the Azure App Service all support Linux and containers. Docker’s Enterprise Edition release in August 2017 also added support for clusters and multi-tier apps made up of a mix of Linux and Windows components.

“[Microsoft is] a cloud operating system company and all their focus is on selling compute and storage in Azure,” said Chris Riley, DevOps analyst at Fixate IO, a content strategy consulting firm based in Livermore, Calif., and a TechTarget contributor. “If Microsoft embraces open source and makes it easier to develop, then it’s going to get easier to get applications into Azure and increase adoption.”

Longtime Windows shops said Linux may be the more efficient choice for future container-based app development pipelines.

“We’re looking to break down some of our bigger apps into microservices to run on OpenShift Origin [container management platform], and for those microservices we’re looking into Linux-based databases,” said Aloisio Rocha, operations specialist at NetEnt, an online gaming systems service provider in Sweden. “We’ve found that, for a large amount of data, we don’t need all the functionality that comes with Windows, and Windows databases are a bit too bloated.”

For Microsoft, containers recapture lost time

OS convergence is likely, especially among IT organizations that want DevOps on Microsoft systems, but there’s still work to do if Windows containers are to achieve parity with Linux. Microsoft is comparatively late to the containers game, with Docker container support first generally available in Windows Server 2016. Windows support for container orchestration platforms such as Kubernetes and ACI remains in the preview stage.

“Microsoft is trying to do progressive things and they’re taking the right kind of steps,” said Brandon Cipes, managing director of DevOps at cPrime, an Agile consulting firm in Foster City, Calif. “But if you want a good indicator of how containers are still hard for them, ACI is debuting on Linux and not Windows.”

Microsoft’s strategy for different versions of the Windows OS has also been in flux as the company strives to make Windows container images more efficient. The company revealed in August 2017 that it will no longer support its Nano Server operating system on host servers, but reserve the micro OS for use as a base image inside containers, while Server Core serves as the primary Windows OS for hosts.

“By removing components relating to hardware, and operating system parts which are not needed inside a container, Microsoft thinks it can make the Nano Server container smaller,” explained Thomas Maurer, cloud architect for Switzerland-based ItnetX, a consulting firm that works with large enterprise clients in Europe. “This will make the startup of the containers faster and cut down on their resource consumption.”

This is another area where Windows containers must catch up to their open source counterparts — several container-specific micro-operating systems are already generally available for Linux.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Use NGINX to load balance across your Docker Swarm cluster

A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three-node cluster and running two Docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.

[Figure: three-node configuration, showing example IP addresses and ports for the swarm hosts and container endpoints]

System requirements

Three* or more computer systems running Windows 10 Creators Update (available today for members of the Windows Insider program), set up as container hosts (see the topic Windows Containers on Windows 10 for more details on how to get started with Docker containers on Windows 10).

Additionally, each host system should be configured with the following:

  • The microsoft/windowsservercore container image
  • Docker Engine v1.13.0 or later
  • Open ports: Swarm mode requires that the following ports be available on each host.
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • TCP and UDP port 4789 for overlay network traffic
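To save a lookup, here is one way to open these ports on each host from an elevated PowerShell prompt (a sketch using the built-in NetSecurity cmdlets):

# Open the swarm mode ports listed above
New-NetFirewallRule -DisplayName "Swarm cluster management" -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node communication TCP" -Direction Inbound -Protocol TCP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node communication UDP" -Direction Inbound -Protocol UDP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay network" -Direction Inbound -Protocol UDP -LocalPort 4789 -Action Allow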

*Note on using two nodes rather than three:
These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster and a host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run the NGINX executable. For the purpose of this exercise, I've made a Dockerfile downloadable from my personal GitHub repo; access the NGINX Dockerfile here, then save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (you can check this using the docker images command).
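The Dockerfile in the repo is the source of truth; as a rough idea of what it does, a minimal equivalent might look like the sketch below (the NGINX version and install path are assumptions chosen to match the paths used later in this walkthrough):

# escape=`
# Hypothetical sketch of an NGINX-on-Windows Dockerfile; the real one is in the author's repo
FROM microsoft/windowsservercore
RUN powershell -Command Invoke-WebRequest http://nginx.org/download/nginx-1.10.3.zip -OutFile C:\nginx.zip -UseBasicParsing
RUN powershell -Command Expand-Archive C:\nginx.zip -DestinationPath C:\nginx
RUN powershell -Command Remove-Item C:\nginx.zip
WORKDIR C:\nginx\nginx-1.10.3
ENTRYPOINT ["C:\\nginx\\nginx-1.10.3\\nginx.exe"]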

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new console window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155, as in the example output shown below.

[Image: example ipconfig output from the nginx container]

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.

[Image: NGINX confirmation page in the browser]

 

Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

On my personal GitHub repo, I have made a Dockerfile available for creating an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container. Download the Dockerfile from here, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

C:\temp\iis> docker build -t iis-web .
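As with the NGINX image, the published Dockerfile is the source of truth; its gist is roughly the following sketch (the keep-alive command is an assumption, and the services defined in Step 4 override it anyway):

# escape=`
# Hypothetical sketch: enable the IIS role on the Windows Server Core base image
FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
CMD ["ping", "-t", "localhost"]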

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.

[Image: IIS welcome page in the browser]

Build two custom IIS Web server images

In this step, we'll replace the IIS landing/confirmation page that we saw above with custom HTML pages–two different pages, corresponding to two different web container images. In a later step, we'll use our NGINX container to load balance across instances of these two images. Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the container instances of the two images.

First, on your host machine, create a simple file called index_1.html and type any text into it. For example, your index_1.html file might look like this:

[Image: example index_1.html content]

Now create a second file, index_2.html, and again type any text into it. For example, your index_2.html file might look like this:

[Image: example index_2.html content]

Now we’ll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one, then get the ID of the container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image, “web_1.”

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

  • Option 1: Repeat the steps above to build the “web_1” and “web_2” containers on your second host.
  • Option 2 [recommended]: Push the images to your repository on Docker Hub then pull them onto additional hosts.

A note on Docker Hub:
Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image

 

Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any of the container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

  • Replace <HOSTIPADDRESS> with the public IP address of your host machine
C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.

  • Replace <MANAGERIPADDRESS> with the public IP address of your host machine (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm from the manager node)
  • Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:

C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;}
C:\> docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;}

You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node:

C:\> docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for example, s1 or s2):

# List all services
C:\> docker service ls
# List info for a specific service
C:\> docker service ps <SERVICENAME>

(Optional) Scale your services

The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To scale the services to be backed by multiple replicas, run the following command:

C:\> docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. An example nginx.conf file was included with the NGINX download that was used to create your nginx container image in Step 1. For the purpose of this exercise, I copied and adapted the example file provided by NGINX and used it to create a simple template for you to adapt with your specific node/container information.

Download the nginx.conf file template that I prepared for this exercise from my personal GitHub repo, and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:

upstream appcluster {
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
 }

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the template config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C: > docker service ps s1
C: > docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

  • One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.
  • Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the port information and node for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file; for each endpoint, add a server to the upstream field of the file, replacing the <HOSTIP> field with the IP address of the node on which the container is running (if you don't have this, run ipconfig on each swarm host machine to obtain it), and the <HOSTPORT> field with the corresponding host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:

upstream appcluster {
     server 172.17.0.10:21858;
     server 172.17.0.11:64199;
     server 172.17.0.10:15463;
     server 172.17.0.11:56049;
     server 172.17.0.11:35953;
     server 172.17.0.10:47364;
}
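For context, that upstream group is referenced elsewhere in the same nginx.conf by a server block along these lines (a sketch; the template from the repo already contains the equivalent, so you should not need to add it):

server {
     listen 80;
     location / {
          # hand each incoming request to one of the upstream endpoints (round-robin by default)
          proxy_pass http://appcluster;
     }
}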

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host to the NGINX container image itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new console window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and

  • If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. (This is the IP address you obtained with the ipconfig command above.)
  • If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an S1 or S2 endpoint.

[Figure: the full three-node configuration]

Caveats and gotchas

Q: Is there a way to publish a single port for my service, so that I can load balance across each of my services rather than each of the individual endpoints for my services?

Unfortunately, we do not yet support publishing a single port for a service on Windows. This capability is swarm mode's routing mesh—a feature that allows you to publish ports for a service, so that the service is accessible to external resources via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

 

Q: Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host's external IP address. This means containers cannot access their host's exposed ports—they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If this instead were done for this exercise, the NGINX load balancer would need to be configured to:

  • Access containers that share its host by their container IP and port
  • Access containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their host’s IPs and exposed ports.