
Announcing Windows 10 Insider Preview Build 16170 for PC

Hello Windows Insiders!

Today we are excited to be releasing a new build from our Development Branch! Windows 10 Insider Preview Build 16170 for PC has been released to Windows Insiders in the Fast ring. As we mentioned earlier this week, you won’t see many big noticeable changes or new features in new builds just yet. That’s because right now, we’re focused on making some refinements to OneCore and doing some code refactoring and other engineering work that is necessary to make sure OneCore is optimally structured for teams to start checking in code. This also means more bugs and other issues that could be slightly more painful to live with – so check your Windows Insider Program settings!

Windows Insider Program for Business is here!

We have one other exciting announcement about a program we co-created with our IT Professional Windows Insiders.

Back in mid-February at Microsoft Ignite in Australia, Bill Karagounis showcased our commitment to an important segment of the Windows Insider program – IT Professionals. As Bill stated, we’re incredibly honored to have IT Pros participating in the Windows Insider Program and to be evaluating Windows 10 and its features as part of their deployment process.

Since his announcement, we’ve continued to receive an overwhelming response from IT Professionals interested in helping us shape the future of the program with features specifically for business. One of the most frequent requests we received from Insiders was for the option to join the Windows Insider Program using corporate credentials (instead of the existing registration process which requires a personal Microsoft Account):

“I’m currently in the Windows Insider Program and would love to be able to test more business-oriented features internally. It would also be great to be able to recruit a few users to run Insider Builds, as well, using the corporate credentials. If there were mechanisms in place for me to see those users’ feedback and issues, that would be great, as well.” – Current Windows Insider at US-based Company

“I want more users in key areas to be able to test/evaluate/learn/feedback. Microsoft accounts are not allowed. We are using SCCM current release and want to establish steps before ‘release ready’ and ‘business ready’.” – Current Windows Insider at UK-based Company

“Due to the rapid release of Windows we need a different channel to where IT Pros can provide feedback to the Dev teams.” – Current Windows Insider at an Australian-based Company

Based on feedback like this, we’re excited to announce today that Insiders can now register for Windows 10 Insider Preview Builds on their PC using their corporate credentials in Azure Active Directory.

Using corporate credentials will enable you to increase the visibility of your organization’s feedback – especially on features that support productivity and business needs. You’ll also be able to better advocate for the needs of your organization, and have real-time dialogue with Microsoft on features critical to specific business needs. This dialogue, in turn, helps us identify trends in issues organizations are facing when deploying Windows 10 and deliver solutions to you more quickly.
We’ll be rolling out even more tools aimed at better supporting IT Professionals and business users in our Insider community. Stay tuned!

How to access the Windows Insider Program for Business features

Simply visit the Windows Insider Program site and click on the “For Business” tab. To access the new features, you must register using your corporate account in Azure Active Directory (AAD). This account is the same account that you use for Office 365 and other Microsoft services.

Once you’ve registered using your corporate credentials, you’ll find a set of resources that will help you get started with the Windows Insider Program for Business in your organization.

Don’t forget – After you register, enroll your Windows 10 PC to get the latest Windows 10 Insider Preview builds:

  • Go to Settings > Update & security > Windows Insider Program. (Make sure that you have administrator rights to your machine and that it has the latest Windows updates.)
  • Click Get Started, enter your corporate credentials that you used to register, then follow the on-screen directions.

Windows Insider for Business participants partner with the Windows Development Team to discover and create features, infuse innovation, and plan for what’s around the bend. We’ve architected some great features together, received amazing feedback, and we’re not done!

In addition, the Windows Insider Program connects you to a global community of IT Pros in our new Microsoft Tech Community and helps provide you with the information and experience you need to grow not only your skills but your career as well. You’ll be hearing a LOT more from us in the coming months.

Windows Insider Program for Business Team

Keep the feedback coming!

Other changes, improvements, and fixes for PC

  • We fixed the issue causing your PC to fail to install new builds on reboot with the error 8024a112.
  • We have updated the share icon in File Explorer (in the Share tab) to match our new share iconography.
  • We fixed an issue where Cortana Reminders was displayed as a possible share target when Cortana wasn’t enabled.
  • We fixed an issue where Miracast sessions would disconnect a minute or so after the Connect UI was closed if the connection was a first time pairing.
  • We fixed a high-DPI issue that occurs when “System (Enhanced)” scaling is enabled, so that certain applications that use graphics-accelerated content now display correctly.
  • Turning the night light schedule off in Settings now turns night light off immediately.

Known issues for PC

  • Narrator will not work on this build. If you require Narrator to work, you should move to the Slow ring until we get this bug fixed.
  • Some Insiders have reported seeing this error “Some updates were cancelled. We’ll keep trying in case new updates become available” in Windows Update. See this forum post for more details.
  • Some apps and games may crash due to a misconfiguration of the advertising ID that happened in a prior build. Specifically, this issue affects new user accounts that were created on Build 15031. The misconfiguration can persist after upgrading to later builds. The ACL on the registry key incorrectly denies access to the user; you can delete the following registry key to get out of this state: HKCU\Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo.
  • There is a bug where if you need to restart your PC due to a pending update like with the latest Surface firmware updates, the restart reminder dialog doesn’t pop up. You should check Settings > Update & security > Windows Update to see if a restart is required.
  • Certain hardware configurations may cause the broadcast live review window in the Game bar to flash green while you are broadcasting. This does not affect the quality of your broadcast and is only visible to the broadcaster. Make sure you have the latest graphics drivers.
  • Double-clicking on the Windows Defender icon in the notification area does not open Windows Defender. Right-clicking on the icon and choosing open will open Windows Defender.
  • Surface 3 devices fail to update to new builds if an SD memory card is inserted. The updated drivers for the Surface 3 that fix this issue have not yet been published to Windows Update.
  • Pressing F12 in Microsoft Edge while the Developer Tools window is open and focused may not return focus to the tab the tools were opened against, and vice versa.
  • The Action Center may get into a state where dismissing one notification unexpectedly dismisses multiple. If this happens, please try rebooting your device.
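
For the advertising ID issue above, the stale key can be removed from an elevated Command Prompt. The command below is a sketch of the workaround rather than text from the original post – double-check the key path on your machine before deleting anything:

```
C:\> reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo" /f
```

Rebooting afterwards is a reasonable precaution so that affected apps pick up the change.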

Keep hustling team,
Dona <3


Time to check your Windows Insider Program settings!

Hello Windows Insiders!

It’s that time again! We’re getting ready to start releasing new builds from our Development Branch. And just like before after the release of a new Windows 10 update, you won’t see many big noticeable changes or new features in new builds just yet. That’s because right now, we’re focused on making some refinements to OneCore and doing some code refactoring and other engineering work that is necessary to make sure OneCore is optimally structured for teams to start checking in code. Now comes our standard warning that these new builds from our Development Branch may include more bugs and other issues that could be slightly more painful for some people to live with. So, if this makes you uncomfortable, you can change your ring by going to Settings > Update & security > Windows Insider Program and moving to the Slow or Release Preview rings for more stable builds.

Additionally, if you are a Windows Insider who wants to stay on the Windows 10 Creators Update – you will need to go to Settings > Update & security > Windows Insider Program and press the “Stop Insider Preview builds” button.

Windows Insider Settings page

A menu will pop up and you will need to choose “Keep giving me builds until the next Windows release”. This will keep you on the Windows 10 Creators Update.

“Keep giving me builds until the next Windows release”

We’re excited to get some new builds out to Insiders soon!

Keep hustling,
Dona <3


Announcing free Microsoft Edge testing in partnership with BrowserStack

Today, we’re thrilled to announce a partnership with BrowserStack, a leader in mobile and web testing, to provide remote virtual testing on Microsoft Edge for free. Until now, developers who need to test against a specific version of Microsoft Edge have been limited to local virtual machines, or PCs with Windows 10 installed. However, there are many developers that don’t have easy access to Microsoft Edge for testing purposes.


BrowserStack Live Testing can run Microsoft Edge inside your browser on macOS, Windows, or Linux.

Today, we are excited to partner with BrowserStack, which provides the industry’s fastest testing on physical devices and browsers, so that you can focus on delivering customers the best version of your product or website. BrowserStack is trusted by developers at over 36,000 companies, including Microsoft, to help make the testing process faster and more accessible. Under this new partnership, developers will be able to sign into BrowserStack and test Microsoft Edge using their Live and Automate services for free.

Live testing provides a remote, cloud-based instance of Microsoft Edge streamed over the web. You can interact with the cloud-based browser just as you would an installed browser, within your local browser on any platform – whether it’s macOS, Linux, or older versions of Windows.

As testing setups are becoming more automated, we are excited to also offer BrowserStack’s Automate testing service under this partnership, for free. This method of testing allows you to run up to 10 Microsoft Edge test sessions via script, which can integrate with your local test runners via the standardized WebDriver API. You can even configure your machine so that the cloud-based browser can see your local development environment—see the Local Testing instructions at BrowserStack to learn more.


Testing Microsoft Edge in BrowserStack using WebDriver automation

To ensure you can test against all possible versions of Microsoft Edge that your users may be using, BrowserStack will be providing three versions of Microsoft Edge for testing: the two most recent “Stable” channel releases, and the most recent “Preview” release (via the Windows Insider Preview Fast ring).

You can test Microsoft Edge on the Windows 10 Anniversary Update (EdgeHTML 14) starting today. EdgeHTML 15 will be available in the Windows 10 Creators Update starting on April 11, 2017, and will come to BrowserStack in the following weeks.

BrowserStack currently serves more than 36,000 companies globally including Microsoft, AirBnB, and MasterCard. In addition to Microsoft Edge, the service provides more than 1100 combinations of operating systems and browsers, and its Real Device Cloud allows anyone, anywhere to test their website on a physical Android or iOS device. With data centers located around the world, BrowserStack is trusted by over 1.6 million developers who rely on the service for fast, accurate testing on physical devices.

We’re very excited to partner with BrowserStack to make this testing service free for Microsoft Edge. Head over to BrowserStack and sign up to get started testing your site in Microsoft Edge today.

― Jason Weber, Director of Program Management, Microsoft Edge


Phone Emulator + Containers + VMs + Networking == Finally working!

I have had a problem for a little while now – the problem is that on my personal laptop I want to use:

  • Visual Studio with the Windows Phone Emulator
  • The Hololens emulator
  • Windows Containers
  • Linux Containers (through Docker for Windows)
  • My virtual machines

However, all of these solutions keep on tripping over each other.  Specifically, they keep on tripping over each other when it comes to networking configuration.  I have spent the last couple of months complaining to the various teams involved in this – and I finally have it all working!  Yay!

There were three key things that came together to make this all work:

  1. Improved guidance around Container and VM networking

    The networking team has been doing a great job of updating the NAT and Container networking documentation.  If you read these documents a couple of months ago – I would highly recommend you revisit them as there is a ton of new information in there.

  2. New installation experience for Windows Containers on Windows 10

    Another thing that has changed in the last couple months is the process for getting Windows Containers up and running on Windows 10.  Specifically – we now utilize Docker for Windows to get you up and running.  Not only does this make it much easier to get things setup – it means you get very clear error messages when things go wrong.  In my case – I received this handy error message:

    [Screenshot: Docker for Windows NAT networking error message]

  3. Learning about XDECleanup

    It turns out that my problem was that I had a stale network configuration from the Windows Phone Emulator.  Handily, the Windows Phone Emulator team ships a tool to help out here – XDECleanup.  If you open an administrative command prompt and run “C:\Program Files (x86)\Microsoft XDE\<version>\XdeCleanup.exe” – it will delete and recreate all networking associated with the Windows Phone Emulator.

For me – updating my container setup to use Docker for Windows combined with running XDECleanup finally got me to a world where all my virtualization based development tools happily work side by side.

Cheers,
Ben


Use NGINX to load balance across your Docker Swarm cluster

A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.

[Figure: three-node configuration, showing the two swarm hosts, the NGINX host, and example IP addresses and ports]

System requirements

Three* or more computer systems running Windows 10 Creators Update (available today for members of the Windows Insider Program), each set up as a container host (see the topic Windows Containers on Windows 10 for more details on how to get started with Docker containers on Windows 10).

Additionally, each host system should be configured with the following:

  • The microsoft/windowsservercore container image
  • Docker Engine v1.13.0 or later
  • Open ports: Swarm mode requires that the following ports be available on each host.
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • TCP and UDP port 4789 for overlay network traffic

*Note on using two nodes rather than three:
These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster and a host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run as an NGINX executable. For the purpose of this exercise, I’ve made a Dockerfile downloadable from my personal GitHub repo; access the NGINX Dockerfile here, then save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (you can check this using the docker images command).
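
For reference, a Dockerfile of this kind is typically only a few lines. The sketch below is a hedged reconstruction rather than the exact file from the repo – the download URL, PowerShell steps, and entry point are assumptions (the nginx-1.10.3 install path matches the one used later in this walkthrough):

```
FROM microsoft/windowsservercore

# Download and extract NGINX into C:\nginx (paths are assumptions)
RUN powershell -Command "Invoke-WebRequest http://nginx.org/download/nginx-1.10.3.zip -OutFile C:\nginx.zip; Expand-Archive C:\nginx.zip C:\nginx"

WORKDIR C:\nginx\nginx-1.10.3

ENTRYPOINT ["nginx.exe"]
```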

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155, as in the example output shown below.

[Screenshot: ipconfig output from the container]

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.



Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

On my personal GitHub repo, I have made a Dockerfile available for creating an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container. Download the Dockerfile from here, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

C:\temp\iis> docker build -t iis-web .

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.


Build two custom IIS Web server images

In this step, we’ll be replacing the IIS landing/confirmation page that we saw above with custom HTML pages–two different pages, corresponding to two different web container images. In a later step, we’ll be using our NGINX container to load balance across instances of these two images. Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the containers instances of the two images.

First, on your host machine create a simple file called, index_1.html. In the file type any text. For example, your index_1.html file might look like this:

[Screenshot: index_1.html contents]

Now create a second file, index_2.html. Again, in the file type any text. For example, your index_2.html file might look like this:

[Screenshot: index_2.html contents]
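
If the screenshots don’t render for you, any minimal HTML will do – the two files just need visibly different content so the load balancer’s behavior is obvious. For example (contents invented for illustration):

```
<!-- index_1.html -->
<html><body><h1>Hello from service 1 (web_1)</h1></body></html>
```

```
<!-- index_2.html -->
<html><body><h1>Hello from service 2 (web_2)</h1></body></html>
```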

Now we’ll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one, then get the ID of the running container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image, “web_1.”

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

  • Option 1: Repeat the steps above to build the “web_1” and “web_2” containers on your second host.
  • Option 2 [recommended]: Push the images to your repository on Docker Hub then pull them onto additional hosts.

A note on Docker Hub:
Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image
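
If you go with Option 2, the flow is roughly the following (a sketch, not verbatim from the original post – replace <YOURUSERNAME> with your Docker Hub account name):

```
# On the host where the images were built: tag and push
C:\> docker tag web_1 <YOURUSERNAME>/web_1
C:\> docker push <YOURUSERNAME>/web_1
C:\> docker tag web_2 <YOURUSERNAME>/web_2
C:\> docker push <YOURUSERNAME>/web_2

# On each additional swarm host: pull, then re-tag locally
C:\> docker pull <YOURUSERNAME>/web_1
C:\> docker tag <YOURUSERNAME>/web_1 web_1
C:\> docker pull <YOURUSERNAME>/web_2
C:\> docker tag <YOURUSERNAME>/web_2 web_2
```

The re-tag step matters because the services in Step 4 are created against the plain image names web_1 and web_2.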


Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The host running the containerized NGINX load balancer cannot run on the same host as any container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

  • Replace <HOSTIPADDRESS> with the public IP address of your host machine
C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.

  • Replace <MANAGERIPADDRESS> with the public IP address of your host machine (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm from the manager node)
  • Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:

C:\> docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;}
C:\> docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;}

You should now have two services running, s1 and s2. (The trailing powershell -command {echo sleep; sleep 360000;} in each service definition simply keeps the container instances alive.) You can view their status by running the following command from your swarm manager node:

C:\> docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for example, s1 or s2):

# List all services
C:\> docker service ls
# List info for a specific service
C:\> docker service ps <SERVICENAME>

(Optional) Scale your services

The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To scale the services to be backed by multiple replicas, run the following command:

C:\> docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. An example nginx.conf file was included with the NGINX download that was used to create your nginx container image in Step 1. For the purpose of this exercise, I copied and adapted the example file provided by NGINX and used it to create a simple template for you to adapt with your specific node/container information.

Download the nginx.conf file template that I prepared for this exercise from my personal GitHub repo, and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:

upstream appcluster {
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
 }

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the template config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.
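
For orientation, the upstream block sits inside a larger configuration that actually routes requests to it. A trimmed sketch of the relevant shape (not the exact template file – directive details in the real template may differ):

```
http {
    upstream appcluster {
        # one entry per container endpoint, filled in as described below
        server <HOSTIP>:<HOSTPORT>;
    }

    server {
        listen 80;
        location / {
            # Requests to the NGINX host are distributed across the
            # upstream servers (round-robin by default)
            proxy_pass http://appcluster;
        }
    }
}
```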

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C:\> docker service ps s1
C:\> docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

  • One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.
  • Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the port information and node for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file: for each endpoint, add a server entry to the upstream field of the file, replacing <HOSTIP> with the IP address of the corresponding node (if you don’t have this, run ipconfig on each swarm host machine to obtain it), and <HOSTPORT> with the corresponding host port.
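To make the mapping concrete, here is a minimal Python sketch that generates the upstream block from a hand-collected list of (node IP, host port) pairs; the endpoint values are placeholders, not real addresses:

```python
# Sketch: build the upstream block of nginx.conf from (node IP, host port)
# pairs gathered from `docker service ps` and ipconfig output.
# The endpoint values below are placeholders for illustration.
endpoints = [
    ("172.17.0.10", 21858),
    ("172.17.0.11", 64199),
]

def upstream_block(name, endpoints):
    lines = [f"upstream {name} {{"]
    lines.extend(f"     server {ip}:{port};" for ip, port in endpoints)
    lines.append("}")
    return "\n".join(lines)

print(upstream_block("appcluster", endpoints))
```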

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:

upstream appcluster {
     server 172.17.0.10:21858;
     server 172.17.0.11:64199;
     server 172.17.0.10:15463;
     server 172.17.0.11:56049;
     server 172.17.0.11:35953;
     server 172.17.0.10:47364;
}

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host into the NGINX container itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to confirm that the container is running, and note its ID. The container ID is the value of <CONTAINERID> in the commands below.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and navigate to your load balancer:

  • If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. (This is the IP address returned by the ipconfig command above.)
  • If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.
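Round-robin itself is easy to picture; the following Python sketch simulates how NGINX’s default strategy cycles through the upstream servers in order (the server list is illustrative):

```python
from itertools import cycle

# Sketch: simulate round-robin selection over upstream endpoints,
# mimicking NGINX's default load-balancing strategy.
# The addresses below are placeholders for illustration.
servers = ["172.17.0.10:21858", "172.17.0.11:64199", "172.17.0.10:15463"]
picker = cycle(servers)

# Six consecutive "requests" visit each server twice, in order.
chosen = [next(picker) for _ in range(6)]
print(chosen)
```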

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an S1 or S2 endpoint.

[Image: the full configuration with all three nodes]

Caveats and gotchas

Q: Is there a way to publish a single port for my service, so that I can load balance across each of my services rather than each of the individual endpoints for my services?

Unfortunately, we do not yet support publishing a single port for a service on Windows. This capability is provided by swarm mode’s routing mesh, a feature that allows you to publish ports for a service so that the service is accessible to external resources via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

 

Q: Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows that prevents containers from accessing their hosts using localhost or even the host’s external IP address. This means containers cannot access their host’s exposed ports; they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If this instead were done for this exercise, the NGINX load balancer would need to be configured to:

  • Access containers that share its host by their container IP and port
  • Access containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their host’s IPs and exposed ports.

Read More

New Hypervisor Top-Level Functional Specification

At the end of last week we published version 5.0 of the Hypervisor Top Level Functional Specification.  This version details the state of the hypervisor in Windows Server 2016.  You can download it from here:

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs

Be warned – this is a very complicated technical document. However, it has also become the first place I point anyone who wants to know more about the internal workings of Hyper-V.

Cheers,
Ben

Read More

Linux Integration Services 4.1.3-2

Linux Integration Services has been updated to version 4.1.3-2 and is available from https://www.microsoft.com/en-us/download/details.aspx?id=51612

This is a minor update to correct the RPMs for a kernel ABI change in Red Hat Enterprise Linux, CentOS, and Oracle Linux’s Red Hat Compatible Kernel version 7.3. Version 3.10.0-514.10.2.el7 of the kernel was sufficiently different for symbol conflicts to break the LIS kernel modules and create a situation where a VM would not start correctly. This version of the modules is compatible with the new kernel.

Read More

Giving a Workgroup Server an FQDN

Recently I needed to be able to securely, remotely manage a set of Windows Servers that were not domain joined. One problem that I hit while setting this up was that the servers did not believe that they had a valid FQDN.

For example – I could:

  • Set the name of a computer to “HyperVSV1”
  • Create a DNS entry mapping “HyperVSV1.mydomain.com” to that computer
  • Successfully ping the computer at that address

But when I tried to use tools like PowerShell Remoting or Remote Desktop, they would complain that the computer at “HyperVSV1.mydomain.com” did not believe it was “HyperVSV1.mydomain.com”.

Thankfully, this is relatively easy to fix.

To fix it, open PowerShell and run the following two commands:

Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name Domain -Value "mydomain.com"
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" -Name "NV Domain" -Value "mydomain.com"

After this, your workgroup server will correctly identify itself with a valid FQDN.
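As a quick sanity check (a sketch, not the tool the post uses), you can ask the machine what name it believes it has; Python’s socket module reports the FQDN produced by the local resolver settings:

```python
import socket

# Sketch: print the fully qualified domain name the local machine
# believes it has. After setting the Domain and "NV Domain" registry
# values (and rebooting if needed), this should include the DNS
# suffix, e.g. HyperVSV1.mydomain.com.
print(socket.getfqdn())
```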

Cheers,
Ben

Read More

Steelcase and Microsoft announce development of technology-enabled spaces designed to boost creative work

GRAND RAPIDS, Mich. and REDMOND, Wash. – March 6, 2017 – Steelcase and Microsoft Corp. have joined forces to explore the future of work, developing a range of technology-enabled spaces designed to help organizations foster creative thinking and better collaboration. These spaces seamlessly integrate the best of Microsoft Surface devices with Steelcase architecture and furniture. Today the companies unveiled five new “Creative Spaces” showcasing how Steelcase and Microsoft can help organizations unlock creativity for every employee.

Additionally, Steelcase and Microsoft announced:

  • That Microsoft is expanding its partner network into the world of design by bringing in select Steelcase dealers as authorized Surface Hub resellers.
  • Steelcase and Microsoft are working together to develop technology-enabled workplace solutions built on Microsoft Azure IoT technology.

“The problems people face at work today are much more complex than they used to be. They require a new creative way of thinking and a very different work process,” says Sara Armbruster, vice president of strategy, research and new business innovation for Steelcase. “We believe that everyone has the capacity for creative thinking, and people are happier doing creative, productive work. Together, Microsoft and Steelcase will help organizations thoughtfully integrate place and technology to encourage creative behaviors at work.”

The Problem: Fostering Creativity as a Business Advantage

According to joint research conducted by Steelcase and Microsoft, creativity is seen as a critical job skill driven by organizations’ need for innovation and growth in addition to employees’ desire for meaningful work. However, today many organizations invest in technology and space as separate entities rather than approaching them holistically. The lack of cohesion creates sub-optimal conditions for fostering creativity at work.

The research released today (of 515 US and Canadian companies with 100+ employees)[i] reveals the pressure people feel about the shift toward more creative work:

  • Seventy-two percent of workers from diverse fields including Health Care, Retail, Education, Financial Services and Manufacturing believe their future success depends on their ability to be creative.
  • Seventy-six percent believe emerging technologies will change their jobs, requiring more creative skills as routine work becomes automated.
  • There is greater need to collaborate in business, yet only 25 percent of respondents feel they can be creative in the places they currently have available for group work.
  • The study also reveals the connection between creativity and privacy, as employees ranked having a place to work without disruption as the second highest factor that could improve creativity, just behind the need for more time to think.

Creative Spaces

The companies’ exploration of creative work found that creativity is a process in which anyone can engage and requires diverse work modes as well as different types of technology. People need to work alone, in pairs and in different size groups throughout a creative process, and they need a range of devices that are mobile and integrated into the physical workplace. Additionally, spaces should inspire people without compromising performance.

“Every Microsoft Surface device strives to enable the creator in each of us. Devices like Surface Studio and Surface Hub are fundamentally designed around how people naturally create, connect, and collaborate,” says Ryan Gavin, general manager, Microsoft Surface Marketing. “With Steelcase we have the compelling opportunity to blend place and technology into a seamless environment that allows our most important asset, our people, to unlock their creativity and share that with others. The future of work is creative.”

“Most employees are still working with outdated technology and in places that are rooted in the past, which makes it difficult for them to work in new, creative ways,” said Bob O’Donnell, president, founder and chief analyst at TECHnalysis Research. “Creative Spaces were clearly designed to bridge the current gap between place and technology and to help creative work happen more naturally.”

Five initial Creative Spaces are on display now at the Steelcase WorkLife Center in New York City. Spaces include:

Focus Studio: Individual creative work requires alone time to focus and get into flow, while also allowing quick shifts to two-person collaboration. This is a place to let ideas incubate before sharing them with a large group, perfect for focused work with Microsoft Surface Book or Surface Pro 4.

Duo Studio: Working in pairs is an essential behavior of creativity. This space enables two people to co-create shoulder-to-shoulder, while also supporting individual work with Microsoft Surface Studio. It includes a lounge area to invite others in for a quick creative review with Surface Hub or to put your feet up and get away without going away.

Ideation Hub: A high-tech destination that encourages active participation and equal opportunity to contribute as people co-create, refine and share ideas with co-located or distributed teammates on Microsoft Surface Hub.

Maker Commons: Socializing ideas and rapid prototyping are essential parts of creativity. This space is designed to encourage quick switching between conversation, experimentation and concentration, ideal for a mix of Surface devices, such as Surface Hub and Surface Book.

Respite Room: Creative work requires many brain states, including the need to balance active group work with solitude and individual think time. This truly private room allows relaxed postures to support diffused attention.

“We are facing a time of unprecedented change at work. Through this partnership we will bring together space and technology to help workers and organizations solve the workplace challenges they face today and in the future and ultimately perform their best at work,” explains Armbruster.

Steelcase: Microsoft Surface Hub Reseller

Select Steelcase dealers are authorized to resell Microsoft Surface Hub as a part of the Microsoft partner network beginning today in the United States and Canada; additional dealers in Germany and the United Kingdom are expected to join the program in late summer 2017. The companies will announce additional markets in the coming months. As the spaces roll out in the Americas, Europe and Asia Pacific, the range of spaces will continue to expand and evolve.

Internet of Things

In the coming months, Steelcase expects to announce new technology-enabled office solutions built on Microsoft Azure IoT technology, which will provide companies with analytics that help improve workplaces and solutions to help employees find the best places to do diverse types of work within the office.

For more information on Creative Spaces and the partnership between Microsoft and Steelcase, visit www.steelcase.com/creativity or www.microsoft.com/en-us/devices/business/steelcase.

About Microsoft

Microsoft (Nasdaq “MSFT” @Microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

About Steelcase Inc.

For over 100 years, Steelcase Inc. has helped create great experiences for the world’s leading organizations. We demonstrate this through our family of brands – including Steelcase®, Coalesse®, Designtex®, PolyVision® and Turnstone®. Together, they offer a comprehensive portfolio of architecture, furniture and technology products and services designed to unlock human promise and support social, economic and environmental sustainability. The company is globally accessible through a network of channels, including over 800 dealer locations. Steelcase is a global, industry-leading and publicly traded company with fiscal 2016 revenue of $3.1 billion.

https://www.steelcase.com/press-releases/steelcase-microsoft-announce-development-technology-enabled-spaces-designed-boost-creative-work/ 


[i] Based on a Microsoft and Steelcase February 2017 study of 515 US and Canadian companies with 100+ employees.

Read More

How to give us feedback

We love hearing from you.  So what’s the best way to give us feedback?

The best way to report an issue or give a quick suggestion is the Feedback Hub on Windows 10 (Windows key + F to open it quickly). The Feedback Hub lets the product team see all of your feedback in one place, and allows other users to upvote and provide further comments. It’s also tightly integrated with our bug tracking and engineering processes, so that we can keep an eye on what users are saying and use this data to help prioritize fixes and feature requests, and so that you can follow up and see what we’re doing about it.

In the latest build, we have reintroduced the Hyper-V feedback category.

After typing your feedback, selecting “Show category suggestions” should help you find the Hyper-V category under Apps and Games. It looks like a couple of people have already discovered the new category:

 

[Image: Hyper-V feedback in the Feedback Hub]

When you put your feedback in the Hyper-V category, we are also able to collect relevant event logs to help diagnose issues. To provide more information about a problem that you can reproduce, hit “begin monitoring”, reproduce the issue, and then “stop monitoring”. This allows us to collect relevant diagnostic information to help reproduce and fix the problem.

[Image: the “Begin monitoring” button]

We also love to hear from you in our forums if there are any issues you are running into. This is a good place to get direct help from the product group as well as community members. Hyper-V Forums


That’s all for now. Looking forward to seeing your feedback!

Cheers,
Andy

Read More