Tag Archives: Docker

Docker MTA program helps enterprises target IBM Cloud

With digital transformation in mind, IBM recently beefed up its relationship with Docker to not only containerize and bring existing applications to the cloud but to make them smarter.

Both Docker and IBM have been lining up partnerships to gain an advantage in the lucrative application modernization market.

The goal of the expanded relationship is to make it easier for enterprises to modernize their existing applications. To accomplish this, Big Blue is combining IBM Cloud with Docker Enterprise Edition and other IBM software and services, said Jason McGee, vice president and CTO of IBM Cloud Platform, in a blog post recently.

“As we continue to build on our partnership with Docker, IBM’s ultimate goal is to help our clients modernize and extend their existing applications by moving them to the cloud as easily as possible,” McGee told TechTarget. “That’s why this work is focused on helping developers quickly convert existing workloads into containers, giving them portability across different systems and cloud platforms. This also enables them to take advantage of the most valuable services the cloud has to offer, such as Watson, machine learning and blockchain, to enhance their applications with new capabilities and experiences.”

Three main points

McGee said the expanded IBM-Docker partnership is focused on three main points: using Docker Enterprise Edition (EE) to containerize workloads and run them on IBM Cloud; bringing IBM into the Docker Modernize Traditional Applications (MTA) program; and making certified IBM software available in the Docker Store.


“All the major ISVs are putting emphasis on the cloud and with IBM Cloud, one of their differentiators is not just that they have business-critical cloud, but this idea that there’s a way to make traditional applications smarter without having to change the core application itself,” said David Messina, chief marketing officer at Docker. “It’s very compelling to IBM enterprises that are looking at digital transformation and wondering where to start and thinking they have to throw the baby out with the bath water.”

Instead, the model that Docker and IBM are presenting offers a clear, deterministic path where enterprises can make “stepwise improvements” without having to radically change their legacy applications, he said.

IBM Cloud services

With Docker EE for IBM Cloud, developers can migrate applications to the IBM Cloud and integrate them with IBM services such as the Watson artificial intelligence services.

In addition to the Watson AI capabilities, IBM offers services such as blockchain, internet of things support, analytics offerings, serverless computing and quantum computing, among others.

“Cloud providers are preparing for battle in their quest to become the cloud of choice,” said Charlotte Dunlap, an analyst at GlobalData. “They’ll accomplish this through key alliances and adoption of leading OSS [open source] technologies such as Docker and Kubernetes. IBM’s alliance with Docker is the latest in this surge in activities among cloud, platform and infrastructure providers to establish well-formed container-orchestration strategies as part of their hybrid cloud offerings.”

Charles King, principal analyst at Pund-IT, said he believes that working with Docker is a good example of how IBM actively avoids disruption by embracing disruptive technologies. Indeed, “the company has done just that for the past two decades, beginning with its backing of Linux and continuing through a litany of support for other open source projects and relationships with sometimes counterintuitive partners.”

King noted that there is irony in that IBM is working with Docker to “modernize existing applications,” because that phrase is used by IBM competitors to ding Big Blue.

“You often see it applied to services and solutions designed to migrate enterprises away from IBM legacy platforms,” he said. “In this case, IBM is actively embracing self-disruption by underscoring the value customers can realize from implementing containers and working with Docker to minimize the pain and maximize the value of that process.”

The Docker MTA program

Docker and IBM announced their expanded relationship at the DockerCon EU in Copenhagen on Oct. 18. At DockerCon 2017 in Austin, Texas, last April, Docker launched its MTA program to help enterprises modernize legacy applications and move them to the cloud.

“Legacy applications, anchored to on-premises data centers, represent more than 90% of enterprise applications deployed today and on average account for 80% of IT budgets,” said Scott Johnston, COO of Docker, in a statement in April.

At the launch of the Docker MTA program, Docker announced partnerships with Avanade, Cisco, HPE and Microsoft. Accenture and Booz Allen Hamilton are also partners. Now Docker has added IBM as an MTA partner.

The Docker MTA program has helped customers like Northern Trust increase application deployment velocity. Under the program, Northern Trust’s Enterprise Technology team was able to provision applications up to four times faster than before.

“This speed of deployment will directly benefit traditional applications and support our overall adoption of enterprise Agile, allowing us to roll out services to our clients more rapidly,” said Scott Murray, CTO of Northern Trust, in a statement.

Meanwhile, McGee said IBM is publishing IBM software in the Docker store, including WebSphere Application Server, WebSphere MQ and the IBM DB2 database.

“This will enable customers to quickly access the software images needed for containerization, and gain confidence in those images through the promises of container certification,” he said in his blog post.

Docker support for Kubernetes

In other Docker news, Docker announced it is integrating the Kubernetes container orchestration system into the Docker platform, alongside Docker’s own Swarm orchestrator. Users will have the choice of using either Kubernetes or Swarm.

“Support for Kubernetes in addition to the Docker Enterprise Edition capabilities, including security, flexibility and enterprise-grade capabilities across a variety of clouds, Linux distributions and Windows, should appeal to enterprises seeking to centrally manage container applications and speed ROI,” said Jay Lyman, principal analyst at 451 Research, in a statement.

Docker’s routing mesh available with Windows Server version 1709

The Windows Core Networking team, along with our friends at Docker, are thrilled to announce that Docker’s ingress routing mesh is now supported on Windows Server version 1709.

Ingress routing mesh is part of swarm mode–Docker’s built-in orchestration solution for containers. Swarm mode first became available on Windows early this year, along with support for the Windows overlay network driver. With swarm mode, users have the ability to create container services and deploy them to a cluster of container hosts. With this, of course, also comes the ability to define published ports for services, so that the apps that those services are running can be accessed by endpoints outside of the swarm cluster (for example, a user might want to access a containerized web service via web browser from their laptop or phone).

To place routing mesh in context, it’s useful to understand that Docker currently provides it, along with another option for publishing services with swarm mode–host mode service publishing:*

  • Host mode is an approach to service publishing that’s optimal for production environments, where system administrators value maximum performance and full control over their container network configuration. With host mode, each container of a service is published directly to the host where it is running.
  • Routing mesh is an approach to service publishing that’s optimized for the developer experience, or for production cases where a simple configuration experience is valued above performance, or control over how incoming requests are routed to the specific replicas/containers for a service. With ingress routing mesh, the containers for a published service can all be accessed through a single “swarm port”–one port, published on every swarm host (even the hosts where no container for the service is currently running!).

While our support for routing mesh is new with Windows Server version 1709, host mode service publishing has been supported since swarm mode was originally made available on Windows. 

*For more information, on how host mode and routing mesh work, visit Docker’s documentation on routing mesh and publishing services with swarm mode.

So, what does it take to use routing mesh on Windows? Routing mesh is Docker’s default service publishing option. It has always been the default behavior on Linux, and now it’s also supported as the default on Windows! This means that all you need to do to use routing mesh is create your services using the --publish flag to the docker service create command, as described in Docker’s documentation.

For example, assume you have a basic web service, defined by a container image called, web-frontend. If you wanted to publish this service to port 80 of each container and port 8080 of all of your swarm nodes, you’d create the service with a command like this:

C:\> docker service create --name web --replicas 3 --publish 8080:80 web-frontend

In this case, the web app, running on a pre-configured swarm cluster along with a db backend service, might look like the app depicted below. As shown, because of routing mesh clients outside of the swarm cluster (in this example, web browsers) are able to access the web service via its published port–8080. And in fact, each client can access the web service via its published port on any swarm host; no matter which host receives an original incoming request, that host will use routing mesh to route the request to a web container instance that can ultimately service that request.
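To make that behavior concrete, here is a hedged sketch of what an external client sees; the host IP addresses below are hypothetical placeholders for your own swarm nodes:

```shell
# Each swarm host publishes the service on the same swarm port (8080),
# so any of these requests reaches a web container instance -- even when
# the receiving host is not currently running one. IPs are hypothetical.
curl http://10.0.0.10:8080
curl http://10.0.0.11:8080
curl http://10.0.0.12:8080
```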

Once again, we at Microsoft and our partners at Docker are proud to make ingress mode available to you on Windows. Try it out on Windows Server version 1709, and using Docker EE Preview*, and let us know what you think! We appreciate your engagement and support in making features like routing mesh possible, and we encourage you to continue reaching out with feedback. Please provide your questions/comments/feature requests by posting issues to the Docker for Windows GitHub repo or by emailing the Windows Core Networking team directly, at sdn_feedback@microsoft.com.

*Note: Ingress mode on Windows currently has the following system requirements:

Delivering Safer Apps with Windows Server 2016 and Docker Enterprise Edition

Windows Server 2016 and Docker Enterprise Edition are revolutionizing the way Windows developers can create, deploy, and manage their applications on-premises and in the cloud. Microsoft and Docker are committed to providing secure containerization technologies and enabling developers to implement security best practices in their applications. This blog post highlights some of the security features in Docker Enterprise Edition and Windows Server 2016 designed to help you deliver safer applications.

For more information on Docker and Windows Server 2016 Container security, check out the full whitepaper on Docker’s site.

Introduction

Today, many organizations are turning to Docker Enterprise Edition (EE) and Windows Server 2016 to deploy IT applications consistently and efficiently using containers. Container technologies can play a pivotal role in ensuring the applications being deployed in your enterprise are safe — free of malware, up-to-date with security patches, and known to come from a trustworthy source. Docker EE and Windows each play a hand in helping you develop and deploy safer applications according to the following three characteristics:

  1. Usable Security: Secure defaults with tooling that is native to both developers and operators.
  2. Trusted Delivery: Everything needed to run an application is delivered safely and guaranteed not to be tampered with.
  3. Infrastructure Independent: Application and security configurations are portable and can move between developer workstations, testing environments, and production deployments regardless of whether those environments are running in Azure or your own datacenter.

Usable Security

Resource Isolation

Windows Server 2016 ships with support for Windows Server Containers, which are powered by Docker Enterprise Edition. Docker EE for Windows Server is the result of a joint engineering effort between Microsoft and Docker. When you run a Windows Server Container, key system resources are sandboxed for each container and isolated from the host operating system. This means the container does not see the resources available on the host machine, and any changes made within the container will not affect the host or other containers. Some of the resources that are isolated include:

  • File system
  • Registry
  • Certificate stores
  • Namespace (privileged API access, system services, task scheduler, etc.)
  • Local users and groups

Additionally, you can limit a Windows Server Container’s use of the CPU, memory, disk usage, and disk throughput to protect the performance of other applications and containers running on the same host.
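As a rough sketch, these limits map onto standard docker run flags; the image name below is illustrative:

```shell
# Cap a Windows Server Container at 2 CPUs and 2 GB of memory to protect
# other workloads on the same host; the image name is illustrative.
docker run -d --cpus 2 --memory 2g microsoft/windowsservercore
```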

Hyper-V Isolation

For even greater isolation, Windows Server Containers can be deployed using Hyper-V isolation. In this configuration, the container runs inside a specially optimized Hyper-V virtual machine with a completely isolated Windows kernel instance. Docker EE handles creating, managing, and deleting the VM for you. Better yet, the same Docker container images can be used for both process isolated and Hyper-V isolated containers, and both types of containers can run side by side on the same host.

Application Secrets

Starting with Docker EE 17.06, support for delivering secrets to Windows Server Containers at runtime is now available. Secrets are simply blobs of data that may contain sensitive information best left out of a container image. Common examples of secrets are SSL/TLS certificates, connection strings, and passwords.

Developers and security operators use and manage secrets in the exact same way — by registering them on manager nodes (in an encrypted store), granting applicable services access to obtain the secrets, and instructing Docker to provide the secret to the container at deployment time. Each environment can use unique secrets without having to change the container image. The container can just read the secrets at runtime from the file system and use them for their intended purposes.
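A minimal sketch of that workflow with the Docker CLI; the secret, service, and image names are hypothetical:

```shell
# 1. Register the secret on a swarm manager node (stored encrypted)
echo "S3cretPassw0rd" | docker secret create db_password -

# 2. Grant a service access to the secret at deployment time
docker service create --name my-app --secret db_password my-app-image

# 3. Inside the container, the application reads the secret from the
#    filesystem (on Windows Server Containers, by default under
#    C:\ProgramData\Docker\secrets\db_password)
```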

Trusted Delivery

Image Signing and Verification

Knowing that the software running in your environment is authentic and came from a trusted source is critical to protecting your information assets. With Docker Content Trust, which is built into Docker EE, container images are cryptographically signed to record the contents present in the image at the time of signing. Later, when a host pulls the image down, it will validate the signature of the downloaded image and compare it to the expected signature from the metadata. If the two do not match, Docker EE will not deploy the image since it is likely that someone tampered with the image.
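In practice, enabling Docker Content Trust is a matter of setting one environment variable before pulling; the repository name below is a placeholder:

```shell
# With content trust enabled, unsigned or tampered images are rejected
export DOCKER_CONTENT_TRUST=1   # on Windows cmd: set DOCKER_CONTENT_TRUST=1
docker pull example/signed-image:latest
```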

Image Scanning and Antimalware

Beyond checking if an image has been modified, it’s important to ensure the image doesn’t contain malware or libraries with known vulnerabilities. When images are stored in Docker Trusted Registry, Docker Security Scanning can analyze images to identify libraries and components in use that have known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database.

Further, when the image is pulled on a Windows Server 2016 host with Windows Defender enabled, the image will automatically be scanned for malware to prevent malicious software from being distributed through container images.

Windows Updates

Working alongside Docker Security Scanning, Microsoft Windows Update can ensure that your Windows Server operating system is up to date. Microsoft publishes two pre-built Windows Server base images to Docker Hub: microsoft/nanoserver and microsoft/windowsservercore. These images are updated the same day as new Windows security updates are released. When you use the “latest” tag to pull these images, you can rest assured that you’re working with the most up to date version of Windows Server. This makes it easy to integrate updates into your continuous integration and deployment workflow.
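For example, pulling the most recently patched Server Core base image is a single command:

```shell
# The "latest" tag resolves to the most recently patched build
docker pull microsoft/windowsservercore:latest
```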

Infrastructure Independent

Active Directory Service Accounts

Windows workloads often rely on Active Directory for authentication of users to the application and authentication between the application itself and other resources like Microsoft SQL Server. Windows Server Containers can be configured to use a Group Managed Service Account when communicating over the network to provide a native authentication experience with your existing Active Directory infrastructure. You can select a different service account (even belonging to a different AD domain) for each environment where you deploy the container, without ever having to update the container image.
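A hedged sketch of how this looks at run time; the credential spec file (generated in advance with Microsoft’s CredentialSpec PowerShell module) and the image name are hypothetical:

```shell
# Start a container whose network identity is the gMSA described in
# webapp_gmsa.json (a credential spec file in Docker's CredentialSpecs
# directory on the host); image name is illustrative.
docker run -d --security-opt "credentialspec=file://webapp_gmsa.json" my-iis-app
```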

Docker Role Based Access Control

Docker Enterprise Edition allows administrators to apply fine-grained role based access control to a variety of Docker primitives, including volumes, nodes, networks, and containers. IT operators can grant users predefined permission roles to collections of Docker resources. Docker EE also provides the ability to create custom permission roles, providing IT operators tremendous flexibility in how they define access control policies in their environment.

Conclusion

With Docker Enterprise Edition and Windows Server 2016, you can develop, deploy, and manage your applications more safely using the variety of built-in security features designed with developers and operators in mind. To read more about the security features available when running Windows Server Containers with Docker Enterprise Edition, check out the full whitepaper and learn more about using Docker Enterprise Edition in Azure.

Azure Log Analytics – Container Monitoring Solution general availability, CNCF Landscape

Docker containers are an emerging technology that helps developers and DevOps teams with easy provisioning and continuous delivery on modern infrastructure. As containers can be ubiquitous in an environment, monitoring is essential. We’ve developed a monitoring solution which provides deep insights into containers, supporting the Kubernetes, Docker Swarm, Mesos DC/OS, and Service Fabric container orchestrators on multiple OS platforms. We are excited to announce the general availability of the Container Monitoring management solution on Azure Log Analytics, available in the Azure Marketplace today.

“Every community contribution helps DC/OS become a better platform for running modern applications, and the addition of Azure Log Analytics Container Monitoring Solution into DC/OS Universe is a meaningful contribution, indeed,” said Ravi Yadav, Technical Partnership Lead at Mesosphere. “DC/OS users are running a lot of Docker containers, and having the option to manage them with a tool like Azure Log Analytics Container Monitoring Solution will result in a richer user experience.”

Microsoft recently joined the Cloud Native Computing Foundation (CNCF) and we continue to invest in open source projects. Azure Log Analytics is now part of the Cloud Native Computing Foundation (CNCF) Landscape under Monitoring Category.

With this solution, you can:

  • See information about all container hosts in a single location
  • Know which containers are running, what image they’re running, and where they’re running
  • See an audit trail for actions on containers
  • Troubleshoot by viewing and searching centralized logs without remote login to the Docker hosts
  • Find containers that may be “noisy neighbors” and consuming excess resources on a host
  • View centralized CPU, memory, storage, and network usage and performance information for containers


New features available as part of the general availability include:

We’ve added new features to provide better insights into your Kubernetes cluster, making it easier to narrow down container issues. You can now use search filters on your own custom pod labels and on Kubernetes cluster hierarchies, and container process information lets you quickly see process status for deeper health analysis. These features are currently Linux-only; additional Windows features are coming soon.

  • Kubernetes cluster awareness, with at-a-glance hierarchy inventory from Kubernetes cluster down to pods
  • New Kubernetes events
  • Capture of custom pod labels, with complex custom search filters
  • Container process information
  • Container node inventory, including storage, network, orchestration type, and Docker version

For more information about how to use Container Monitoring solution, as well as the insights you can gather, see Containers solution in Log Analytics.

Learn more by reading previous blogs on Azure Log Analytics Container Monitoring.

How do I try this?

You can get a free subscription for Microsoft Azure so that you can test the Container Monitoring solution features.

How can I give you guys feedback?

We plan on enhancing monitoring capabilities for containers. If you have feedback or questions, please feel free to contact us!

Use NGINX to load balance across your Docker Swarm cluster

A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.
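As a preview of what the NGINX configuration amounts to, here is a minimal, hypothetical sketch of an nginx.conf that round-robins across published container endpoints; the addresses and ports are placeholders, not the figure’s actual values:

```nginx
events {}

http {
    # One entry per published container endpoint on the swarm hosts;
    # all addresses and ports below are placeholders.
    upstream container_endpoints {
        server 10.0.0.10:8081;
        server 10.0.0.10:8082;
        server 10.0.0.11:8081;
        server 10.0.0.11:8082;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://container_endpoints;
        }
    }
}
```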

[Figure: three-node setup showing the Swarm Manager, Swarm Worker, and NGINX Host, with example IP addresses and ports]

System requirements

Three* or more computer systems running Windows 10 Creators Update (available today for members of the Windows Insiders program), set up as container hosts (see the topic, Windows Containers on Windows 10, for more details on how to get started with Docker containers on Windows 10).

Additionally, each host system should be configured with the following:

  • The microsoft/windowsservercore container image
  • Docker Engine v1.13.0 or later
  • Open ports: Swarm mode requires that the following ports be available on each host.
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • TCP and UDP port 4789 for overlay network traffic
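If Windows Firewall is enabled on the hosts, these ports can be opened from an elevated prompt; a sketch using netsh (the rule names are arbitrary):

```shell
netsh advfirewall firewall add rule name="Swarm management TCP 2377" dir=in action=allow protocol=TCP localport=2377
netsh advfirewall firewall add rule name="Swarm node comms TCP 7946" dir=in action=allow protocol=TCP localport=7946
netsh advfirewall firewall add rule name="Swarm node comms UDP 7946" dir=in action=allow protocol=UDP localport=7946
netsh advfirewall firewall add rule name="Swarm overlay TCP 4789" dir=in action=allow protocol=TCP localport=4789
netsh advfirewall firewall add rule name="Swarm overlay UDP 4789" dir=in action=allow protocol=UDP localport=4789
```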

*Note on using two nodes rather than three:
These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster and a host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run the NGINX executable. For the purpose of this exercise, I’ve made a Dockerfile downloadable from my personal GitHub repo; access the NGINX Dockerfile here, then save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (you can check this using the docker images command).

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new cmd window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155, as in the example output shown below.

[Screenshot: ipconfig output from the NGINX container, showing its IP address]

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.

[Screenshot: the NGINX welcome page, confirming the server is running]

 

Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

On my personal GitHub repo, I have made a Dockerfile available for creating an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container. Download the Dockerfile from here, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

C:\temp\iis> docker build -t iis-web .

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.

[Screenshot: the IIS welcome page, confirming the Web server role is running]

Build two custom IIS Web server images

In this step, we’ll be replacing the IIS landing/confirmation page that we saw above with custom HTML pages–two different pages, corresponding to two different web container images. In a later step, we’ll be using our NGINX container to load balance across instances of these two images. Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the container instances of the two images.

First, on your host machine create a simple file called index_1.html. In the file, type any text. For example, your index_1.html file might look like this:

[Screenshot: example index_1.html content]

Now create a second file, index_2.html. Again, type any text in the file. For example, your index_2.html file might look like this:

[Screenshot: example index_2.html content]

Now we’ll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one, then get the ID of the running container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image, “web_1.”

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

  • Option 1: Repeat the steps above to build the “web_1” and “web_2” containers on your second host.
  • Option 2 [recommended]: Push the images to your repository on Docker Hub then pull them onto additional hosts.

A note on Docker Hub:
Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image
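If you choose Option 2, the push/pull cycle looks roughly like this; “mydockerid” is a placeholder for your own Docker Hub account name:

```shell
# On the build host: tag the images under your Docker Hub namespace
docker tag web_1 mydockerid/web_1
docker tag web_2 mydockerid/web_2

# Log in and push the images to Docker Hub...
docker login
docker push mydockerid/web_1
docker push mydockerid/web_2

# ...then, on each additional swarm host, pull them back down
docker pull mydockerid/web_1
docker pull mydockerid/web_2
```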

 

Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any of the container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

  • Replace <HOSTIPADDRESS> with the public IP address of your host machine
C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.

  • Replace <MANAGERIPADDRESS> with the public IP address of your manager node (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm from the manager node)
  • Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:

C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;}
C:\> docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;}

You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node:

C:\> docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for example, s1 or s2):

# List all services
C:\> docker service ls
# List info for a specific service
C:\> docker service ps <SERVICENAME>

(Optional) Scale your services

The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To scale the services to be backed by multiple replicas, run the following command:

C:\> docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. An example nginx.conf file was included with the NGINX download that was used to create your nginx container image in Step 1. For the purpose of this exercise, I copied and adapted the example file provided by NGINX and used it to create a simple template for you to adapt with your specific node/container information.

Download the nginx.conf file template that I prepared for this exercise from my personal GitHub repo, and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:

upstream appcluster {
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
 }

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the template config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C:\> docker service ps s1
C:\> docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

  • One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.
  • Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the port information and node for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file: for each endpoint, add a server entry to the upstream field, replacing <HOSTIP> with the IP address of the node on which that endpoint is running (if you don’t have this, run ipconfig on each swarm host machine to obtain it), and <HOSTPORT> with the corresponding host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:

upstream appcluster {
     server 172.17.0.10:21858;
     server 172.17.0.11:64199;
     server 172.17.0.10:15463;
     server 172.17.0.11:56049;
     server 172.17.0.11:35953;
     server 172.17.0.10:47364;
}
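With more than a handful of endpoints, assembling the upstream block by hand gets tedious. As a rough illustration only (the node names, IPs, ports, and helper function below are all invented for this sketch), the parsing of the `*:<HOSTPORT>->80/tcp` port strings and the generation of the upstream block could be scripted:

```python
import re

# Hypothetical rows taken from `docker service ps` output:
# (value of the "node" column, value of the "ports" column).
service_rows = [
    ("host-a", "*:21858->80/tcp"),
    ("host-b", "*:64199->80/tcp"),
    ("host-a", "*:15463->80/tcp"),
]

# Node name -> IP mapping, e.g. gathered by running ipconfig on each host.
node_ips = {"host-a": "172.17.0.10", "host-b": "172.17.0.11"}

def upstream_block(rows, ips):
    """Render an nginx 'upstream' block from (node, ports) rows."""
    servers = []
    for node, ports in rows:
        match = re.search(r"\*:(\d+)->80/tcp", ports)
        if match:
            servers.append(f"     server {ips[node]}:{match.group(1)};")
    return "upstream appcluster {\n" + "\n".join(servers) + "\n}"

print(upstream_block(service_rows, node_ips))
```

The output has the same shape as the example upstream block shown above, with one server line per container endpoint.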

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host to the NGINX container image itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command window and use the docker ps command to confirm that the container is running, and note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and:

  • If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. (This is the container IP returned by the ipconfig command above, not the container ID.)
  • If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.
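The round-robin pattern itself is simple to picture. The sketch below is an illustration of the scheduling behavior, not NGINX code, and uses endpoint addresses invented for the example:

```python
from itertools import cycle

# Three hypothetical container endpoints from the upstream list.
endpoints = [
    "172.17.0.10:21858",
    "172.17.0.11:64199",
    "172.17.0.10:15463",
]

# Round-robin: each request goes to the next server in the upstream
# list, wrapping around to the first server at the end of the list.
rr = cycle(endpoints)
served = [next(rr) for _ in range(5)]
print(served)
# The 4th request wraps back around to the first endpoint.
```

Each browser refresh plays the role of one request here, which is why repeated refreshes alternate between the two web pages.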

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an S1 or S2 endpoint.

[Figure: the full configuration, showing the NGINX load balancer node and the swarm nodes]

Caveats and gotchas

Q: Is there a way to publish a single port for my service, so that I can load balance across each of my services rather than each of the individual endpoints for my services?

Unfortunately, publishing a single port for a service is not yet supported on Windows. This capability is swarm mode’s routing mesh feature, which allows you to publish a port for a service so that the service is accessible via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

 

Q: Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address. This means containers cannot access their host’s exposed ports—they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If this instead were done for this exercise, the NGINX load balancer would need to be configured to:

  • Access containers that share its host by their container IP and port
  • Access containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their host’s IPs and exposed ports.

Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10

Windows 10 Insiders can now take advantage of overlay networking and Docker swarm mode  to manage containerized applications in both single-host and clustering scenarios.

Containers are a rapidly growing technology, and as they evolve so must the technologies that support them as members of a broader collection of compute, storage and networking infrastructure components. For networking, in particular, this means continually striving to achieve better connectivity, higher reliability and easier management for container networking. Less than six months ago, Microsoft released Windows 10 Anniversary Edition and Windows Server 2016, and even as our first versions of Windows with container support were being celebrated we were already hard at work on new container features, including several container networking features.

Our last Windows release showcased Docker Compose and service discovery—two key features for single-host container deployment and networking scenarios. Now, we’re expanding the reach of Windows container networking to multi-host (clustering) scenarios with the addition of a native overlay network driver and support for Docker swarm mode, available today to Windows Insiders as part of the upcoming Windows 10, Creators Update.

Docker swarm mode is Docker’s native orchestration tool, designed to simplify the experience of declaring, managing and scaling container services. The Windows overlay network driver (which uses VXLAN and virtual overlay networking technology) makes it possible to connect container endpoints running on separate hosts to the same, isolated network. Together, swarm mode and overlay enable easy management and complete scalability of your containerized applications, allowing you to leverage the full power of your infrastructure hosts.

What is “swarm mode”?

Swarm mode is a Docker feature that provides built-in container orchestration capabilities, including native clustering of Docker hosts and scheduling of container workloads. A group of Docker hosts forms a “swarm” cluster when their Docker engines are running together in “swarm mode.”

A swarm is composed of two types of container hosts: manager nodes, and worker nodes. Every swarm is initialized via a manager node, and all Docker CLI commands for controlling and monitoring a swarm must be executed from one of its manager nodes. Manager nodes can be thought of as “keepers” of the Swarm state—together, they form a consensus group that maintains awareness of the state of services running on the swarm, and it’s their job to ensure that the swarm’s actual state always matches its intended state, as defined by the developer or admin.

Note: Any given swarm can have multiple manager nodes, but it must always have at least one.

Worker nodes are orchestrated by Docker swarm via manager nodes. To join a swarm, a worker node must use a “join token” that was generated by the manager node when the swarm was initialized. Worker nodes simply receive and execute tasks from manager nodes, and so they require (and possess) no awareness of the swarm state.
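Conceptually, the managers’ role as “keepers” of the swarm state can be pictured as a reconciliation loop that compares intended state against actual state. The sketch below is only an illustration of the idea (service names and replica counts are invented), not actual swarm internals:

```python
# Intended state (as declared by the admin) vs. actual state
# (replicas currently running), per service.
desired = {"s1": 3, "s2": 2}
actual = {"s1": 1, "s2": 3}

def reconcile(desired, actual):
    """Compute the scheduling actions needed to close the gap."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append((service, "start", want - have))
        elif have > want:
            actions.append((service, "stop", have - want))
    return actions

print(reconcile(desired, actual))  # → [('s1', 'start', 2), ('s2', 'stop', 1)]
```

In a real swarm the resulting tasks would be dispatched to worker nodes, which simply execute them without any awareness of the overall swarm state.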


Figure 1: A four-node swarm cluster running two container services on isolated overlay networks.

Figure 1 offers a simple visualization of a four-node cluster running in swarm mode, leveraging the overlay network driver. In this swarm, Host A is the manager node and Hosts B-D are worker nodes. Together, these manager and worker nodes are running two Docker services which are backed by a total of ten container instances, or “replicas.” The yellow in this figure distinguishes the first service, Service 1; the containers for Service 1 are connected by an overlay network. Similarly, the blue in this figure represents the second service, Service 2; the containers for Service 2 are also attached by an overlay network.

Note: In this case, the two Docker services happen to be connected by separate/isolated overlay networks. It is also possible, however, for multiple container services to be attached to the same overlay network.

Windows Network Stack Implementation

Under the covers, Swarm and overlay are enabled by enhancements to the Host Network Service (HNS) and Windows libnetwork plugin for the Docker engine, which leverage the Azure Virtual Filtering Platform (VFP) forwarding extension in the Hyper-V Virtual Switch. Figure 2 shows how these components work together on a given Windows container host, to enable overlay and swarm mode functionality.

Figure 2: Key components involved in enabling swarm mode and overlay networking on Windows container hosts.


The HNS overlay network driver plugin and VFP forwarding extension

Overlay networking was enabled with the addition of an overlay network driver plugin to the HNS service, which creates encapsulation rules using the VFP forwarding extension in the Hyper-V Virtual Switch; the HNS overlay plugin communicates with the VFP forwarding extension to perform the VXLAN encapsulation required to enable overlay networking functionality.

On Windows, the Azure Virtual Filtering Platform (VFP) is a software defined networking (SDN) element, installed as a programmable Hyper-V Virtual Switch forwarding extension. It is a shared component with the Azure platform, and was added to Windows 10 with Windows 10 Anniversary Edition. It is designed as a high performance, rule-flow based engine, to specify per-endpoint rules for forwarding, transforming, or blocking network traffic. The VFP extension has been used for implementing the l2bridge and l2tunnel Windows container networking modes and is now also used to implement the overlay networking mode. As we continue to expand container networking capabilities on Windows, we plan to further leverage the VFP extension to enable more fine-grained policy.

Enhancements to the Windows libnetwork plugin

Overlay networking support was the main hurdle that needed to be overcome to achieve Docker swarm mode support on Windows. Aside from that, additions also needed to be made to the Windows libnetwork Plugin—the plugin to the Docker engine that enables container networking functionality on Windows by facilitating communication between the Docker engine and the HNS service.

Load balancing: Windows routing mesh coming soon

Currently, Windows supports DNS round-robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can set up an external load balancer (e.g. NGINX) and use swarm’s publish-port mode to expose container host ports over which to load balance.

Boost your DevOps cycle and manage containers across Windows hosts by leveraging Docker swarm mode today

Together, Docker Swarm and support for overlay container networks enable multi-host scenarios and rapid scalability of your Windows containerized applications and services. This new support, combined with service discovery and the rest of the capabilities that you are used to leveraging in single-host configurations, makes for a clean and straight-forward experience developing containerized apps on Windows for multi-host environments.

To get started with Docker Swarm and overlay networking on Windows, start here.

The Datacenter and Cloud Networking team worked alongside our partners internally and at Docker to bring overlay networking mode and Docker swarm mode support to Windows. Again, this is an exciting milestone in our ongoing work to bring better container networking to Windows users. We’re constantly seeking more ways to improve your experience working with containers on Windows, and it’s only with your feedback that we can best decide what to do next to enable you and your DevOps teams.

We encourage you to share your experiences, questions and feedback with us, to help us learn more about what you’re doing with container networking on Windows today, and to understand what you’d like to achieve in the future. Visit our Contact Page to learn more about the forums that you can use to be in touch with us.

Introducing the Host Compute Service (HCS)

Summary

This post introduces a low level container management API in Hyper-V called the Host Compute Service (HCS).  It tells the story behind its creation, and links to a few open source projects that make it easier to use.

Motivation and Creation

Building a great management API for Docker was important for Windows Server Containers.  There’s a ton of really cool low-level technical work that went into enabling containers on Windows, and we needed to make sure they were easy to use.  This seems very simple, but figuring out the right approach was surprisingly tricky.

Our first thought was to extend our existing management technologies (e.g. WMI, PowerShell) to containers.  After investigating, we concluded that they weren’t optimal for Docker, and started looking at other options.

Next, we considered mirroring the way Linux exposes containerization primitives (e.g. control groups, namespaces, etc.).  Under this model, we could have exposed each underlying feature independently, and asked Docker to call into them individually.  However, there were a few questions about that approach that caused us to consider alternatives:

  1. The low level APIs were evolving (and improving) rapidly.  Docker (and others) wanted those improvements, but also needed a stable API to build upon.  Could we stabilize the underlying features fast enough to meet our release goals?
  2. The low level APIs were interesting and useful because they made containers possible.  Would anyone actually want to call them independently?

After a bit of thinking, we decided to go with a third option.  We created a new management service called the Host Compute Service (HCS), which acts as a layer of abstraction above the low level functionality.  The HCS was a stable API Docker could build upon, and it was also easier to use.  Making a Windows Server Container with the HCS is just a single API call.  Making a Hyper-V Container instead just means adding a flag when calling into the API.  Figuring out how those calls translate into actual low-level implementation is something the Hyper-V team has already figured out.
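To illustrate the shape of that abstraction: the sketch below is purely hypothetical pseudo-wrapper code; the function and field names are invented for illustration and are not the real HCS API. The point is simply that one call covers both container types, differing only by a flag rather than a separate code path:

```python
# Hypothetical wrapper illustrating the HCS abstraction (not real API).
def create_container(image, hyperv_isolation=False):
    """Build a container configuration document for a hypothetical HCS wrapper."""
    return {"Image": image, "HyperVIsolation": hyperv_isolation}

wsc = create_container("windowsservercore")        # Windows Server Container
hvc = create_container("windowsservercore", True)  # Hyper-V Container
print(wsc["HyperVIsolation"], hvc["HyperVIsolation"])  # → False True
```

How either configuration maps onto the underlying low-level primitives is the part the Hyper-V team has already worked out behind the API.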

[Figures: linux-arch and windows-arch – container management architecture on Linux and on Windows]

Getting Started with the HCS

If you think this is nifty, and would like to play around with the HCS, here’s some information to help you get started.  Instead of calling our C API directly, I recommend using one of the friendly wrappers we’ve built around the HCS.  These wrappers make it easy to call the HCS from higher level languages, and are released open source on GitHub.  They’re also super handy if you want to figure out how to use the C API.  We’ve released two wrappers thus far.  One is written in Go (and used by Docker), and the other is written in C#.

You can find the wrappers here:

If you want to use the HCS (either directly or via a wrapper), or you want to make a Rust/Haskell/InsertYourLanguage wrapper around the HCS, please drop a comment below.  I’d love to chat.

For a deeper look at this topic, I recommend taking a look at John Stark’s DockerCon presentation: https://www.youtube.com/watch?v=85nCF5S8Qok

John Slack
Program Manager
Hyper-V Team

Use Docker Compose and Service Discovery on Windows to scale-out your multi-service container application

Article by Kallie Bracken and Jason Messer

The containers revolution popularized by Docker has come to Windows so that developers on Windows 10 (Anniversary Edition) or IT Pros using Windows Server 2016 can rapidly build, test, and deploy Windows “containerized” applications!

Based on community feedback, we have made several improvements to the Windows containers networking stack to enable multi-container, multi-service application scenarios. Support for Service Discovery and the ability to create (or re-use existing) networks are at the center of the improvements that were made to bring the efficiency of Docker Compose to Windows. Docker Compose enables developers to instantly build, deploy and scale-out their “containerized” applications running in Windows containers with just a few simple commands. Developers define their application using a ‘Compose file’ to specify the services, corresponding container images, and networking infrastructure required to run their application. Service Discovery itself is a key requirement to scale-out multi-service applications using DNS-based load-balancing and we are proud to announce support for Service Discovery in the most recent versions of Windows 10 and Windows Server 2016.

Take your next step in mastering development with Windows Containers, and keep letting us know what great capabilities you would like to see next!


When it comes to using Docker to manage Windows containers, with just a little background it’s easy to get simple container instances up and running. Once you’ve covered the basics, the next step is to build your own custom container images using Dockerfiles to install features, applications and other configuration layers on top of the Windows base container images. From there, the next step is to get your hands dirty building multi-tier applications, composed of multiple services running in multiple container instances. It’s here—in the modularization and scaling-out of your application—that Docker Compose comes in; Compose is the perfect tool for streamlining the specification and deployment of multi-tier, multi-container applications. Docker Compose registers each container instance by service name through the Docker engine thereby allowing containers to ‘discover’ each other by name when sending intra-application network traffic. Application services can also be scaled-out to multiple container instances using Compose. Network traffic destined to a multi-container service is then round-robin’d using DNS load-balancing across all container instances implementing that service.
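The DNS-based round-robin behavior that underpins this service discovery can be pictured roughly as follows. This is an illustrative sketch only (the service name and IPs are invented), not Docker’s actual implementation:

```python
from itertools import cycle

# The Docker engine registers each container instance under its
# service name; name lookups rotate through the instances' IPs.
dns = {"db": cycle(["172.24.0.2", "172.24.0.3"])}

def resolve(service):
    """Return the next container IP registered for the given service."""
    return next(dns[service])

# A 'web' container simply connects to "db" by name; successive
# lookups are spread across the two 'db' replicas.
results = [resolve("db") for _ in range(3)]
print(results)  # → ['172.24.0.2', '172.24.0.3', '172.24.0.2']
```

Because discovery happens by service name, containers need no knowledge of how many replicas back a service or where they run.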

This post walks through the process of creating and deploying a multi-tier blog application using Docker Compose (Compose file and application shown in Figure 1).


Figure 1: The Compose File used to create the blog application, including its BlogEngine.NET front-end (the ‘web’ service) and SQL Server back-end (the ‘db’ service).

Note: Docker Compose can be used to scale-out applications on a single host which is the scope of this post. To scale-out your ‘containerized’ application across multiple hosts, the application should be deployed on a multi-node cluster using a tool such as Docker Swarm. Look for multi-host networking support in Docker Swarm on Windows in the near future.

The first tier of the application is an ASP.NET web app, BlogEngine.NET, and the back-end tier is a database built on SQL Server Express 2014. The database is created to manage and store blog posts from different users which are subsequently displayed through the Blog Engine app.

New to Docker or Windows Containers?

This post assumes familiarity with the basics of Docker, Windows containers and ‘containerized’ ASP.NET applications. Here are some good places to start if you need to brush up on your knowledge:

Setup

System Prerequisites

Before you walk through the steps described in this post, check that your environment meets the following requirements and has the most recent versions of Docker and Windows updates installed:

  • Windows 10 Anniversary Edition (Professional or Enterprise) or Windows Server 2016
    Windows Containers requires your system to have critical updates installed. Check your OS version by running winver.exe, and ensure you have installed the latest KB 3192366 and/or Windows 10 updates.
  • The latest version of Docker-Compose (available with Docker-for-Windows) must be installed on your system.

NOTE: The current version of Docker Compose on Windows requires that the Docker daemon be configured to listen on a TCP socket for new connections. A Pull Request (PR) to fix this issue is in review and will be merged soon. For now, please ensure that you do the following:

Please configure the Docker Engine by adding a “hosts” key to the daemon.json file (example shown below) following the instructions here. Be sure to restart the Docker service after making this change.

{
…
"hosts":["tcp://0.0.0.0:2375", "npipe:////./pipe/win_engine"]
…
}

When running docker-compose, you will either need to explicitly reference the host port by adding the option -H tcp://localhost:2375 to the command (e.g. docker-compose -H "tcp://localhost:2375" up), or set your DOCKER_HOST environment variable to always use this port (e.g. $env:DOCKER_HOST="tcp://localhost:2375").

Blog Application Source with Compose and Dockerfiles

This blog application is based on the Blog Engine ASP.NET web app, available publicly here: http://www.dnbe.net/docs/.  To follow this post and build the described application, a complete set of files is available on GitHub. Download the Blog Application files from GitHub and extract them to a location somewhere on your machine, e.g. the ‘C:\build’ directory.

The blog application directory includes:

  • A ‘web’ folder that contains the Dockerfile and resources that you’ll need to build the image for the blog application’s ASP.NET front-end.
  • A ‘db’ folder that contains the Dockerfile and resources that you’ll need to build the blog application’s SQL database back-end.
  • A ‘docker-compose.yml’ file that you will use to build and run the application using Docker Compose.

The top-level of the blog application source folder is the main working directory for the directions in this post. Open an elevated PowerShell session and navigate there now – e.g.

PS C:\> cd C:\build

The Blog Application Container Images

Database Back-End Tier: The ‘db’ Service

The database back-end Dockerfile is located in the ‘db’ sub-folder of the blog application source files and can be referenced here: The Blog Database Dockerfile. The main function of this Dockerfile is to run two scripts over the Windows Server Core base OS image to define a new database as well as the tables required by the BlogEngine.NET application.

The SQL scripts referenced by the Dockerfile to construct the blog database are included in the ‘db’ folder, and copied from host to container when the container image is created so that they can be run on the container.

BlogEngine.NET Front-End

The BlogEngine.NET Dockerfile is in the ‘web’ sub-folder of the blog application source files.

This Dockerfile refers to a PowerShell script (buildapp.ps1) that does the majority of the work required to configure the web service image. The buildapp.ps1 PowerShell Script obtains the BlogEngine.NET project files using a download link from Codeplex, configures the blog application using the default IIS site, grants full permission over the BlogEngine.NET project files (something that is required by the application) and executes the commands necessary to build an IIS web application from the BlogEngine.NET project files.

After running the script to obtain and configure the BlogEngine.NET web application, the Dockerfile finishes by copying the Web.config file included in the ‘web’ sub-folder to the container, to overwrite the file that was downloaded from Codeplex. The config file provided has been altered to point the ‘web’ service to the ‘db’ back-end service.

Streamlining with Docker Compose

When dealing with only one or two independent containers, it is simple to use the ‘docker run’ command to create and start a container image. However, as soon as an application begins to gain complexity, perhaps by including several inter-dependent services or by deploying multiple instances of any one service, the notion of configuring and running that app “manually” becomes impractical. To simplify the definition and deployment of an application, we can use Docker Compose.

A Compose file is used to define our “containerized” application using two services—a ‘web’ service and a ‘db’ service.  The blog application’s Compose File (available here for reference) defines the ‘web’ service which runs the BlogEngine.NET web front-end tier of the application and the ‘db’ service which runs the SQL Server 2014 Express back-end database tier. The compose file also handles network configuration for the blog application (with both application-level and service-level granularity).

Something to note in the blog application Compose file, is that the ‘expose’ option is used in place of the ‘ports’ option for the ‘db’ service. The ‘ports’ option is analogous to using the ‘-p’ argument in a ‘docker run’ command, and specifies HOST:CONTAINER port mapping for a service. However, this ‘ports’ option specifies a specific container host port to use for the service thereby limiting the service to only one container instance since multiple instances can’t re-use the same host port. The ‘expose’ option, on the other hand, can be used to define the internal container port with a dynamic, external port selected automatically by Docker through the Windows Host Networking Service – HNS. This allows for the creation of multiple container instances to run a single service; where the ‘ports’ option requires that every container instance for a service be mapped as specified, the ‘expose’ option allows Docker Compose to handle port mapping as required for scaled-out scenarios.
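The contrast can be illustrated with a simplified Compose fragment. This is not the project’s exact docker-compose.yml; service definitions are abbreviated to show only the two options being compared:

```yaml
# Illustrative fragment: 'ports' pins a specific host port, so only one
# container instance can run; 'expose' publishes the container port and
# lets Docker pick the host port, so the service can scale out.
version: '2'
services:
  web:
    build: ./web
    ports:
      - "80:80"      # HOST:CONTAINER - fixed host port, single instance
  db:
    build: ./db
    expose:
      - "1433"       # container port only; host port chosen dynamically
```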

The ‘networks’ key in the Compose file specifies the network to which the application services will be connected. In this case, we define the default network for all services as external, meaning Docker Compose will not create a network. The ‘nat’ network referenced is the default NAT network created by the Docker Engine when Docker is first installed.
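In Compose file syntax, marking the default network as external and pointing it at the pre-existing ‘nat’ network would look something like this (an illustrative fragment, not the project file verbatim):

```yaml
# The default network is declared external, so Compose attaches
# services to the pre-existing 'nat' network rather than creating one.
networks:
  default:
    external:
      name: nat
```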

‘docker-compose build’

In this step, Docker Compose is used to build the blog application. The Compose file references the Dockerfiles for the ‘web’ and ‘db’ services and uses them to build the container image for each service.

From an elevated PowerShell session, navigate to the top level of the Blog Application directory. For example,

cd C:\build

Now use Docker Compose to build the blog application:

docker-compose build

‘docker-compose up’

Now use Docker Compose to run the blog application:

docker-compose up

This will cause a container instance to be run for each application service. Execute the following command to confirm that the blog application is now up and running.

docker-compose ps

You can access the blog application through a browser on your local machine, as described below.

Define Multiple, Custom NAT Networks

In previous Windows Server 2016 technical previews, Windows was limited to a single NAT network per container host. While this is still technically the case, it is possible to define custom NAT networks by segmenting the default NAT network’s large, internal prefix into multiple subnets.

For instance, if the default NAT internal prefix was 172.31.211.0/20, a custom NAT network could be carved out from this prefix. The ‘networks’ section in the Compose file could be replaced with the following:

networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
      - subnet: 172.31.212.0/24

This would create a user-defined NAT network with a user-defined IP subnet prefix (in this case, 172.31.212.0/24). The ipam option is used to specify this custom IPAM configuration.

Note: Ensure that any custom nat network defined is a subset of the larger nat internal prefix previously created. To obtain your host nat network’s internal prefix, run ‘docker network inspect nat’.

View the Blog Application

Now that the containers for the ‘web’ and ‘db’ services are running, the blog application can be accessed from the local container host using the internal container IP and port (80). Use the command docker inspect <web container instance> to determine this internal IP address.

To access the application, open a web browser on the container host and navigate to “http://&lt;container ip&gt;/BlogEngine/”. For instance, you might enter: http://172.16.12.216/BlogEngine

To access the application from an external host that is connected to the container host’s network, you must use the Container Host IP address and mapped port of the web container. The mapped port of the web container endpoint is displayed from docker-compose ps or docker ps commands. For instance, you might enter: http://10.123.174.107:3658/BlogEngine

The blog application may take a moment to load, but soon your browser should present the following page.

Screenshot of page


Taking Advantage of Service Discovery

Built into Docker is Service Discovery, which offers two key benefits: service registration and service name to IP (DNS) mapping. Service Discovery is especially valuable in the context of scaled-out applications, as it allows multi-container services to be discovered and referenced in the same way as single container services; with Service Discovery, intra-application communication is simple and concise—any service can be referenced by name, regardless of the number of container instances that are being used to run that service.

Service registration is the piece of Service Discovery that makes it possible for containers/services on a given network to discover each other by name. As a result of service registration, every application service is registered with a set of internal IP addresses for the container endpoints that are running that service. With this mapping, DNS resolution in the Docker Engine responds to any application endpoint seeking to communicate with a given service by sending a randomly ordered list of the container IP addresses associated with that service. The DNS client in the requesting container then chooses one of these IPs for container-container communication. This is referred to as DNS load-balancing.
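The round-robin behavior described above can be simulated in a few lines of Python. This is a toy model of the mechanism, not Docker’s implementation; the service name, IP addresses, and function names are all illustrative assumptions:

```python
import random

# Simulated service registration: the 'db' service name maps to the
# container endpoint IPs of its instances (illustrative addresses).
service_registry = {"db": ["172.31.212.10", "172.31.212.11", "172.31.212.12"]}

def resolve(name):
    """Mimic the engine's DNS responder: return the registered IPs
    for a service name in randomized order."""
    ips = list(service_registry[name])
    random.shuffle(ips)
    return ips

def connect(name):
    """Mimic a DNS client: pick the first IP from the shuffled answer."""
    return resolve(name)[0]

# Repeated lookups may land on different instances - DNS load-balancing.
print(connect("db"))
```

Because each query returns the instance list in a fresh random order, clients naturally spread their connections across the instances without any explicit load-balancer.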

Through DNS mapping, Docker abstracts away the added complexity of managing multiple container endpoints; because of this piece of Service Discovery, a single service can be treated as an atomic entity, no matter how many container instances it has running behind the scenes.

Note: For further context on Service Discovery, visit this Docker resource. However, note that Windows does not support the ‘--link’ option.

Scale-Out with ‘docker-compose scale’


While the service registration benefit of Service Discovery is leveraged by an application even when one container instance is running for each application service, a scaled-out scenario is required for the benefit of DNS load-balancing to truly take effect.

To run a scaled-out version of the blog application, use the following command (either in place of ‘docker-compose up’ or even after the compose application is up and running). This command will run the blog application with one container instance for the ‘web’ service and three container instances for the ‘db’ service.

docker-compose scale web=1 db=3

Recall that the docker-compose.yml file provided with the blog application project files does not allow for scaling multiple instances of the ‘web’ service. To scale the web service, the ‘ports’ option for the web service must be replaced with the ‘expose’ option. However, without a load-balancer in front of the web service, a user would need to reference individual container endpoint IPs and mapped ports for external access into the web front-end of this application. An improvement to this application would be to use volume mapping so that all ‘db’ container instances reference the same SQL database files. Stay tuned for a follow-on post on these topics.
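The change described above—swapping the fixed host-port mapping for ‘expose’ so the ‘web’ service can scale—would look roughly like this (an illustrative fragment, not the project file verbatim):

```yaml
# To allow 'docker-compose scale web=N', replace the 'ports' mapping
# with 'expose' so each instance gets its own dynamic host port.
version: '2'
services:
  web:
    build: ./web
    expose:
      - "80"    # container port only; no fixed host port to collide on
```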

Service Discovery in action

In this step, Service Discovery will be demonstrated through a simple interaction between the ‘web’ and ‘db’ application services. The idea here is to ping different instances of the ‘db’ service to see that Service Discovery allows it to be accessed as a single service, regardless of how many container instances are implementing the service.

Before you begin: Run the blog application using the ‘docker-compose scale’ instruction described above.

Return to your PowerShell session, and run the following command to ping the ‘db’ back-end service from your web service. Notice the IP address from which you receive a reply.

docker run blogengine ping db

Now run the ping command again, and notice whether or not you receive a reply from a different IP address (i.e. a different ‘db’ container instance).*

docker run blogengine ping db

The image below demonstrates the behavior you should see—after pinging 2-3 times, you should receive replies from at least two different ‘db’ container instances:

PowerShell Output

* There is a chance that Docker will return the set of IPs making up the ‘db’ service in the same order as your first request. In this case, you may not see a different IP address. Repeat the ping command until you receive a reply from a new instance.

Technical Note: Service Discovery implemented in Windows

On Linux, the Docker daemon starts a new thread in each container namespace to catch service name resolution requests. These requests are sent to the Docker engine, which implements a DNS resolver and responds back to the thread in the container with the IP address(es) of the container instance(s) corresponding to the service name.

In Windows, service discovery is implemented differently due to the need to support both Windows Server Containers (shared Windows kernel) and Hyper-V Containers (isolated Windows kernel). Instead of starting a new thread in each container, the primary DNS server for the container endpoint’s IP interface is set to the default gateway of the (NAT) network. A request to resolve the service name is sent to the default gateway IP, where it is caught by the Windows Host Networking Service (HNS) in the container host. The HNS service then sends the request to the Docker engine, which replies with the IP address(es) of the container instance(s) for the service. HNS then returns the DNS response to the container.

Windows Container Networking

Actual Author:  Jason Messer

There is a lot of excitement and energy around the introduction of Windows containers and Microsoft’s partnership with Docker. For Windows Server Technical Preview 5, we invested heavily in the container network stack to better align with the Docker management experience and brought our own networking expertise to add additional features and capabilities for Windows containers! This article will describe the Windows container networking stack, how to attach your containers to a network using Docker, and how Microsoft is making containers first-class citizens in the modern datacenter with Microsoft Azure Stack.

Introduction

Windows Containers can be used to host all sorts of different applications from web servers running Node.js to databases, to video streaming. These applications all require network connectivity in order to expose their services to external clients. So what does the network stack look like for Windows containers? How do we assign an IP address to a container or attach a container endpoint to a network? How do we apply advanced network policy such as maximum bandwidth caps or access control list (ACL) rules to a container?

Let’s dive into this topic by first looking at a picture of the container’s network stack in Figure 1.

architecture

Figure 1 – Windows Container Network Stack

All containers run inside a container host, which could be a physical server, a Windows client, or a virtual machine. It is assumed that this container host already has network connectivity through a NIC using WiFi or Ethernet, which it needs to extend to the containers themselves. The container host uses a Hyper-V virtual switch to provide this connectivity and connects the containers to the virtual switch (vSwitch) using either a host virtual NIC (Windows Server Containers) or a synthetic VM NIC (Hyper-V Containers). Compare this with Linux containers, which use a bridge device instead of the Hyper-V virtual switch and veth pairs instead of vNICs/vmNICs to provide this basic Layer-2 (Ethernet) connectivity to the containers.

The Hyper-V virtual switch alone does not allow network services running in a container to be accessible from the outside world, however. We also need Layer-3 (IP) connectivity to correctly route packets to their intended destination. In addition to IP, we need higher-level networking protocols such as TCP and UDP to correctly address specific services running in a container using a port number (e.g. TCP port 80 is typically used to access a web server). Additional Layer 4-7 services such as DNS, DHCP, HTTP, SMB, etc. are also required for containers to be useful. All of these options and more are supported with Windows container networking.

Docker Network Configuration and Management Stack

New in Windows Server Technical Preview 5 (TP5) is the ability to set up container networking using the Docker client and the Docker engine’s RESTful API. Network configuration settings can be specified either at container network creation time or at container creation time, depending upon the scope of the setting. Reference the MSDN article for more information.

The Windows Container Network management stack uses Docker as the management surface and the Windows Host Network Service (HNS) as a servicing layer to create the network “plumbing” underneath (e.g. vSwitch, WinNAT, etc.). The Docker engine communicates with HNS through a network plug-in (libnetwork). Reference Figure 2 to see the updated management stack.

Figure 2 – Management Stack


With this Docker network plugin interfacing with the Windows network stack through HNS, users no longer have to create their own static port mappings or custom Windows Firewall rules for NAT as these are automatically created for you.

Example: Create static Port Mapping through Docker
Note: NetNatStaticMapping (and Firewall Rule) created automatically

screenshot

 

Networking Modes

Windows containers will attach to a container host network using one of four different network modes (or drivers). The networking mode used determines how the containers will be accessible to external clients, how IP addresses will be assigned, and how network policy will be enforced.

Each of these networking modes uses an internal or external VM switch – created automatically by HNS – to connect containers to the container host’s physical (or virtual) network. Briefly, the four networking modes are described below with recommended usage. Please refer to the MSDN article for more in-depth information about each mode:

  • NAT – this is the default network mode and attaches containers to a private IP subnet. This mode is quick and easy to use in any environment.
  • Transparent – this networking mode attaches containers directly to the physical network without performing any address translation. Use this mode with care as it can quickly cause problems in the physical network when too many containers are running on a particular host.
  • L2 Bridge / L2 Tunnel – these networking modes should usually be reserved for private and public cloud deployments when containers are running on a tenant VM.

Note: The “NAT” VM Switch Type will no longer be available in Windows Server 2016 or Windows 10 client builds. NAT container networks can be created by specifying the “nat” driver in Docker or NAT Mode in PowerShell.

Example: Create Docker ‘nat’ network
Notice how VM Switch and NetNat are created automatically

screenshot2

Container Networking + Software Defined Networking (SDN)

Containers are increasingly becoming first-class citizens in the datacenter and enterprise alongside virtual machines. IaaS cloud tenants or enterprise business units need to be able to programmatically define network policy (e.g. ACLs, QoS, load balancing) for both VM network adapters as well as container endpoints. The Software Defined Networking (SDN) Stack (TechNet topic) in Windows Server 2016 allows customers to do just that by creating network policy for a specific container endpoint through the Windows Network Controller using either PowerShell scripts, SCVMM, or the new Azure Portal in the Microsoft Azure Stack.

In a virtualized environment, the container host will be a virtual machine running on a physical server. The Network Controller will send policy down to a Host Agent running on the physical server using standard southbound channels (e.g. OVSDB). The Host Agent will then program this policy into the VFP extension in the vSwitch on the physical server, where it will be enforced. This network policy is specific to an IP address (e.g. container endpoint), so that even though multiple container endpoints are attached through a container host VM using a single VM network adapter, network policy can still be granularly defined.

Using the L2 tunnel networking mode, all container network traffic from the container host VM will be forwarded to the physical server’s vSwitch. The VFP forwarding extension in this vSwitch will enforce the policy received from the Network Controller and higher-levels of the Azure Stack (e.g. Network Resource Provider, Azure Resource Manager, Azure Portal). Reference Figure 3 to see how this stack looks.

Figure 3 – Containers attaching to SDN overlay virtual network


This will allow containers to join overlay virtual networks (e.g. VXLAN) created by individual cloud tenants to communicate across multi-node clusters and with other VMs, as well as receive network policy.

Future Goodness

We will continue to innovate in this space not only by adding code to the Windows OS but also by contributing code to the open source Docker project on GitHub. We want Windows container users to have full access to the rich set of network policy and be able to create this policy through the Docker client. We’re also looking at ways to apply network policy as close to the container endpoint as possible in order to shorten the data-path and thereby improve network throughput and decrease latency.

 

Please continue to offer your feedback and comments on how we can continue to improve Windows Containers!

~ Jason Messer