Tag Archives: Siron

[VIDEO] Hyper-V Masterclass – Debunking Virtual Domain Controller Myths

Should Hyper-V be in the domain? Can Hyper-V host its own domain controller? Eric Siron confronts some potentially crippling myths about Hyper-V and domain controllers in this video and also boots up an instance to put these mistruths to rest.

Read the post here: [VIDEO] Hyper-V Masterclass – Debunking Virtual Domain Controller Myths

The Complete Guide to Hyper-V 2016 Integration Services

28 Sep 2017 by Eric Siron
   
Hyper-V Articles

Isolation represents one of the fundamental features of a hypervisor. If we didn’t want isolation, we would have little need to virtualize anything. We could install one operating system on a host and add software until we ran out of capacity. Security and compatibility prevent that approach. However, too much isolation causes other problems. Microsoft supplies the Hyper-V Guest Services as one solution.

The Isolation Model

Visually, Hyper-V’s isolation model looks something like this:

Hyper-V's isolation model

The management operating system loads the hardware drivers that Hyper-V utilizes, but otherwise stays separated. Hyper-V keeps the management operating system and virtual machines “partitioned” from itself and each other. Of course, 100% isolation is neither possible nor practical. So, Hyper-V provides several interfaces for the virtual machines to use. VMBus manages most of these interfaces. As a “bus”, it carries requests and responses between Hyper-V and the guests.

The available interfaces in Hyper-V 2016 (that I know of):

  • Emulated and synthetic hardware, SR-IOV, and Discrete Device Access
  • Automatic virtual machine activation (Datacenter Edition only)
  • PowerShell Direct
  • Service monitoring (Failover Clustering only)
  • Integration services
    • Operating system shutdown
    • Time synchronization
    • Data Exchange
    • Heartbeat
    • Backup (volume shadow copy)
    • Guest services
  • Virtual Trusted Platform Module (vTPM)
  • Hyper-V Sockets

This article focuses on the integration services.

An Overview of the Integration Services

Each integration service provides a specific function to the virtual machine. For most, you can figure out its function just by looking at its name. The functions are projected into each virtual machine, but a matching software construct must exist and be enabled within the guest operating system.

For Windows guests, Microsoft provides a set of standard Windows services.

  • Hyper-V Data Exchange Service: aligns with Data Exchange
  • Hyper-V Guest Service Interface: aligns with Guest services
  • Hyper-V Guest Shutdown Service: aligns with Operating system shutdown
  • Hyper-V Time Synchronization Service: aligns with Time synchronization
  • Hyper-V Volume Shadow Copy Requestor: aligns with Backup (volume shadow copy)

Microsoft has distributed these as core components of its operating systems since Windows Vista/Windows Server 2008. Up through Windows Server and Hyper-V Server 2012 R2, every server maintained a local copy of the integration services and conveniently packaged them as vmguest.iso at each reboot. You can find the files in the host’s “Windows\vmguest” folder and the packaged ISO in “Windows\System32”.

Starting with Windows 10/Windows Server 2016, these files no longer exist on the host.

Windows Services that are Not Part of the Hyper-V Integration Services

You may see additional services that include Hyper-V or something that implies Hyper-V. Other than the ones that I listed, none are part of the official Hyper-V 2016 Integration Services stack that we’re talking about in this article.

Those services:

  • HV Host Service: Rather than re-invent the wheel, I’ll just paste the perfectly adequate description text right from the service: “Provides an interface for the Hyper-V hypervisor to provide per-partition performance counters to the host operating system.”
  • Hyper-V Host Compute Service: You will find this service on Hyper-V 2016 hosts, including Windows 10 and nested Hyper-V instances. It provides an API for container systems.
  • Hyper-V Remote Desktop Virtualization Service: I have no idea what this service does. It’s always running on my 2016 guests but not on my 2012 R2 guests. My first thought was that it enables Enhanced Session Mode from within the guest. So I stopped it. I had no trouble using Enhanced Session mode after that. It may be for RemoteFx.
  • Hyper-V Virtual Machine Management: This is the familiar VMMS that has been with us since the beginning. In previous versions, it was just called “Virtual Machine Management”. It runs on all Hyper-V hosts and enables VM control functionality (start, stop, etc.).

Integration Services Version Compatibility

Even though the host-side projections exist within every virtual machine that it creates, not every guest operating system has access to the matching client-side service. As an example, the Hyper-V Guest Service Interface cannot be installed into any guest prior to Windows 8.1/Windows Server 2012 R2. Therefore, that service will not be available on earlier operating systems.

Also, the host’s version rules over all guests. If you install a later version on a guest and then move it to an older host, you might have some difficulties.

Installing/Updating Hyper-V Guest Services on Windows

As previously mentioned, all currently supported versions of Windows and Windows Server ship with the guest services already installed. The process for updating them has changed since its inception, however.

For Windows XP/Windows Server 2003 up through Windows 8.1/Windows Server 2012 R2 while running on any Hyper-V host version from 2008 through 2012 R2, you would install/update by inserting the host’s vmguest.iso in the guest. The Hyper-V Virtual Machine Connection window included an action:

Integration Services Setup Disk

Any time that Microsoft released an update for the integration services via Windows Update, only the host would be automatically updated. The guests could only be updated via a manual process. You could use the action shown above, manually insert the vmguest.iso, or even manually transfer the install files into the guest.

Now, the client packages for the integration services are also delivered via Windows Update. Simply keep your guests up-to-date and you will no longer need to worry about this.

Note 1: I am not certain what operating systems are included. I know that W8.1/WS2012 R2 and W10/WS2016 receive updates this way. I am not certain about earlier versions. If you know, please tell me.

Note 2: For older operating systems, remember that the host version rules. Meaning, do not attempt to install the files from a 2012 R2 host into a guest running on Hyper-V Server 2008.

Note 3: You’ll face some challenges for unsupported legacy operating systems such as Windows XP. You’ll need to find the newest versions that will still install by trying each of the server versions in turn. Do not expect perfect results.

Installing/Updating Hyper-V Guest Services on Linux

The core Hyper-V guest services have been natively built into the Linux kernel for quite some time. If you use a recent, mainstream distribution, then you’ve already got most of the services. They will automatically be updated anytime you update or upgrade your kernel.

If you wish to update the components separately or on a system with an older kernel, Microsoft makes the Linux Integration Services available for download. As of this writing, the current release version is 4.2. The lis-next GitHub project provides the source for feature backports to older versions. You can watch Microsoft’s official virtualization blog for notifications about new versions.

Not all features are included by default with the kernel-embedded integration services, notably Data Exchange and file copy (the only part of Guest services that Linux supports). Start on Microsoft’s official Linux on Hyper-V page. On the left or at the link list at the bottom, find your distribution’s page. Follow the directions to enable the additional features on your distribution.

Details on the Hyper-V Integration Services

Each service provides specific functionality, described below.

The Hyper-V Operating System Shutdown Service

Hyper-V’s Operating System Shutdown Service communicates with the matching service in the guest operating system to gracefully shut down the guest operating system. Without it, you can only force stop the virtual machine from Hyper-V.
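
For example, from the host, Stop-VM depends on this service for a clean shutdown, while the -TurnOff switch skips it entirely. The VM name below is just an example.

# Graceful shutdown, delivered through the Operating System Shutdown integration service
Stop-VM -Name svdc1

# Hard power-off; equivalent to pulling the plug, no integration service involved
Stop-VM -Name svdc1 -TurnOff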

The Hyper-V Time Synchronization Service

Hyper-V’s Time Synchronization Service keeps the guest’s time in line with the host’s. It respects any time zone differences. Do not use it for virtualized domain controllers, especially one that holds the PDC emulator role. Otherwise, it’s generally a good idea to leave it enabled for all guests.
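
If you need to turn it off for a virtualized domain controller, you can do that from the host. The VM name below is just an example.

# Stop the host from pushing its clock into a virtualized domain controller
Disable-VMIntegrationService -VMName svdc1 -Name 'Time Synchronization'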

The Hyper-V Data Exchange Service

Hyper-V uses key-value pairs to transmit unformatted information between the guest and the host. On Windows guests, the service operates through the guest’s registry. On Linux, the service operates through specially formatted files. On the host side, the Hyper-V WMI API facilitates key exchange for all guests. I have an article series that explains all of this and provides some plumbing to make it easy to use.
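
As a minimal sketch of the host side (the VM name is an example; run it elevated), the guest-supplied items can be read directly from the Hyper-V WMI provider:

# Read guest-supplied key-value pairs for one VM through the Hyper-V WMI provider
$vm  = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter "ElementName='svdc1'"
$kvp = $vm.GetRelated('Msvm_KvpExchangeComponent') | Select-Object -First 1
foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    $xml  = [xml]$item
    $name = $xml.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Name']/VALUE").InnerText
    $data = $xml.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Data']/VALUE").InnerText
    '{0} = {1}' -f $name, $data
}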

The Hyper-V Heartbeat Service

Due to isolation, the hypervisor can only directly sense if a virtual machine is in a powered on state. Beyond that, whatever happens inside the VM stays inside the VM. The heartbeat service is a simple sort of up/down tool. If the service can respond to the hypervisor, then the guest operating system is running at least well enough to start services. There may be some more detailed checks included, but that’s the basic summary of this service.

The Hyper-V Backup (volume shadow copy) Service

The volume shadow copy service (usually known as VSS) is a mechanism that Windows uses to facilitate crash- and application-consistent backups without stopping a computer. It functions by quiescing I/O and notifying applications that a backup will be occurring. Virtual machines present a problem if you want to back them up from the host. Even though Hyper-V can pause processing for a virtual machine, it can’t directly quiesce the guest operating system. So, its backup integration service operates as an intermediary.

When backup begins on the host, it can use this integration service to notify the VSS service within the guest. On Linux, the integration service uses a similar mechanism, although it can only quiesce I/O.

In older versions of Hyper-V, virtual machine backup used the standard VSS mechanism in the host. In modern versions, it relies on Hyper-V’s checkpointing mechanism instead. However, in-guest operations are still controlled by VSS.

Hyper-V Guest Services

I really wish that Microsoft had come up with a better name than just Guest services. Without context, it’s tough to realize that this is not a synonym for the overall category of Hyper-V integration services. The Guest services Hyper-V service allows you to copy files directly between the host and the guest.
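
On the host, this feature surfaces as the Copy-VMFile cmdlet. The VM name and paths below are examples; the Guest services component must be enabled on the target VM and its guest operating system must support it.

# Push a file from the host into the guest over the Guest services channel
Copy-VMFile 'svdc1' -SourcePath 'C:\Temp\script.ps1' -DestinationPath 'C:\Temp\script.ps1' -FileSource Host -CreateFullPath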

How to Manipulate Integration Services in PowerShell

You should not manipulate services from within the guest. Use PowerShell or the GUI at the Hyper-V level. It will automatically handle service status within the guest.

Be aware that you can’t force a feature to work if the guest’s integration services don’t support it. For instance, you can’t enable the Guest Service for a Windows 2008 R2 guest and then copy a file into it.

Retrieve Integration Services in PowerShell

To see the guest services and their statuses for a virtual machine, use Get-VMIntegrationService.

Ex:
Get-VMIntegrationService -VMName svdc1

Get-VMIntegrationService output

From the output, you can see if a service is enabled and, if it is, its current status. Note that disabled services will always show a primary status of OK. This particular VM shows a nice example for secondary statuses as well. For the Heartbeat service, you can see that its monitored indicators are all OK. For VSS, it wants to tell us something about the component version, but we can’t read it all here.

To expand a secondary status description, specify the Name of the integration service to check, enclose the whole thing in parentheses, and use the dot selector to pick the SecondaryStatusDescription property. Ex:
(Get-VMIntegrationService -VMName svdc1 -Name 'VSS').SecondaryStatusDescription

The expanded SecondaryStatusDescription

Basically, the integration service is out of date, so I might have problems backing this one up.

Enable Integration Services in PowerShell

Enable an integration service with Enable-VMIntegrationService.

Ex:
Enable-VMIntegrationService -VMName svdc1 -Name 'Guest Service Interface'

It will help if you run Get-VMIntegrationService first so that you can copy/paste the name. You can also use the pipeline to send objects from Get-VMIntegrationService to Enable-VMIntegrationService.
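
For example, this pipeline enables the Guest services component for every virtual machine on the host in one pass:

Get-VM | Get-VMIntegrationService -Name 'Guest Service Interface' | Enable-VMIntegrationService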

Disable Integration Services in PowerShell

Disable an integration service with Disable-VMIntegrationService.

Ex:
Disable-VMIntegrationService -VMName svdc1 -Name 'Guest Service Interface'

It will help if you run Get-VMIntegrationService first so that you can copy/paste the name. You can also use the pipeline to send objects from Get-VMIntegrationService to Disable-VMIntegrationService.

How to Manipulate Integration Services in Hyper-V Manager

In Hyper-V Manager, you can determine if any given service is enabled. It will show the status of the Heartbeat service, but no others. Failover Cluster Manager uses the same dialog to show enabled/disabled state for services but does not show status for any of them.

On the virtual machine’s Settings dialog, look on the Integration Services tab. A service is enabled if it is checked. Be aware that checking a box for a service that the client does not support has no effect. For instance, you can check the Guest service box for a 2008 R2 guest, but you will not be able to copy files to/from it.

Integration Services for Hyper-V 2016 Management

To check the Heartbeat status, highlight the VM in Hyper-V Manager’s Virtual Machines pane. In the lower section, ensure that you are on the Summary tab. Look for the Heartbeat field:

Hyper-V Manager 2016

Leveraging Integration Services in Automation

I think that you can easily determine uses for these services, so I won’t throw a lot of silly ideas at you.

I would recommend that you find a way to incorporate regular checks of the Heartbeat service into your monitoring system. If the secondary status doesn’t return OK, there’s a solid chance that the virtual machine has crashed.
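
A minimal sketch of such a check, which you could schedule or fold into an existing monitoring script:

# Flag any running VM whose Heartbeat component reports a secondary status other than OK
Get-VM | Where-Object State -eq 'Running' |
    Get-VMIntegrationService -Name 'Heartbeat' |
    Where-Object SecondaryStatusDescription -ne 'OK' |
    Select-Object VMName, PrimaryStatusDescription, SecondaryStatusDescription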

Look around a bit, and you might find other uses.

Have any questions or feedback?

Leave a comment below!


Hyper-V 2016 Host Mode: GUI vs Core

26 Sep 2017 by Eric Siron
   
Hyper-V Articles

Choice is a good thing, right? Well… usually. Sometimes, choice is just confusing. With most hypervisors, you get what you get. With Hyper-V, you can install in three different ways, and that’s just for the server hypervisor. In this article, we’ll balance the pros and cons of your options with the 2016 SKUs.

Server Deployment Options for Hyper-V

As of today, you can deploy Hyper-V in one of four packages.

Nano Server

When 2016 initially released, it brought a completely new install mode called “Nano”. Nano is little more than the Windows Server kernel with a tiny handful of interface bits attached. You then plug in the roles and features that you need to get to the server deployment that you want. I was not ever particularly fond of the idea of Hyper-V on Nano for several reasons, but none of them matter now. Nano Server is no longer supported as a Hyper-V host. It currently works, but that capability will be removed in the next iteration. Part of the fine print about Nano that no one reads includes the requirement that you keep within a few updates of current. So, you will be able to run Hyper-V on Nano for a while, but not forever.

If you currently use Nano for Hyper-V, I would start plotting a migration strategy now. If you are considering Nano for Hyper-V, stop.

Hyper-V Server

Hyper-V Server is the product name given to the free distribution vehicle for Hyper-V. You’ll commonly hear it referred to as “Hyper-V Core”, although that designation is both confusing and incorrect. You can download Hyper-V Server as a so-called “evaluation”, but it never expires.

A word of advice: Hyper-V Server includes a legally-binding license agreement. Violation of that licensing agreement subjects you to the same legal penalties that you would face for violating the license agreement of a paid operating system. Hyper-V Server’s license clearly dictates that it can only be used to host and maintain virtual machines. You cannot use it as a file server or a web server or anything else. Something that I need to make extremely clear: the license agreement does not provide special allowances for a test environment. I know of a couple of blog articles that guide you to doing things under the guise of “test environment”. That’s not OK. If it’s not legal in a production environment, it doesn’t magically become legal in a test environment.

Windows Server Core

When you boot to the Windows Server install media, the first listed option includes “Core” in the name. That’s not an accident; Microsoft wants you to use Core mode by default. Windows Server Core excludes the primary Windows graphical interface and explorer.exe. Some people erroneously believe that means that no graphical applications can be run at all. Applications that use the Explorer rendering engine will not function (such as MMC), but the base Windows Forms libraries and mechanisms exist.
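
A quick way to see that for yourself on a Core installation:

# The Windows Forms assemblies load and render even though explorer.exe is absent
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.MessageBox]::Show('Windows Forms works without Explorer') | Out-Null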

Windows Server with GUI

I doubt that the GUI mode of Windows Server needs much explanation. You have the same basic graphical interface as Windows 10 with some modifications that make it more appropriate for a server environment. When you install from 2016 media, you will see this listed as (Desktop Experience).

The Pros and Cons of the Command-line and Graphical Modes for Hyper-V

I know that things would be easier if I would just tell you what to do. If I knew you and knew your environment, I might do that. I prefer giving you the tools and knowledge to make decisions like this on your own, though. So, we’ll complement our discussion with a pros and cons list of each option. After the lists, I’ll cover some additional guidelines and points to consider.

Hyper-V Server Pros and Cons

If you skipped the preamble, remember that “Hyper-V Server” refers to the completely free SKU that you can download at any time.

Pros of Hyper-V Server:

  • Never requires a licensing fee
  • Never requires activation
  • Smallest deployment
  • Smallest “surface area” for attacks
  • Least memory usage by the management operating system
  • Fewest patch needs
  • Includes all essential features for running Hyper-V (present, not necessarily enabled by default):
    • Hyper-V hypervisor
    • Hyper-V PowerShell interface
    • Cluster membership
    • Domain membership
    • Hyper-V Replica membership
    • Remote Desktop Virtual Host role for VDI deployments
    • RemoteFx (automatic with RDVH role)

Cons of Hyper-V Server:

  • Cannot provide Automatic Virtual Machine Activation
  • Cannot provide deduplication features
  • Impossible to enable the Windows Server GUI
  • Software manufacturers may refuse to support their software on it
  • Third-party support operations, such as independent consulting firms, may not have any experience with it
  • Switching to Windows Server requires a complete reinstall
  • Difficult to manage hardware

Hyper-V in Windows Server Core Pros and Cons

If you’ve seen the term “Hyper-V Core”, that probably means “Hyper-V Server”. This section covers the authentic Windows Server product installed in Core mode.

Pros of Windows Server Core for Hyper-V:

  • Microsoft recommends Windows Server Core for Hyper-V
  • Receives feature updates on the quickest schedule (look toward the bottom of the link in the preceding bullet)
  • Comparable deployment size to Hyper-V Server
  • Comparable surface area to Hyper-V Server
  • Comparable memory usage to Hyper-V Server
  • Comparable patch requirements to Hyper-V Server
  • Allows almost all roles and features of Windows Server
  • Can provide Automatic Activation for Windows Server in VMs (Datacenter Edition only)

Cons of Windows Server Core for Hyper-V:

  • Impossible to enable the Windows Server GUI
  • Must be licensed and activated
  • Upgrading to the next version requires paying for that version’s license, even if you will wait to deploy newer guests
  • Software manufacturers may refuse to support their software on it
  • Third-party support operations, such as independent consulting firms, may not have any experience with it
  • Difficult to manage hardware

Hyper-V in Windows Server GUI Pros and Cons

We saved what many consider the “default” option for last.

Pros of Windows Server with GUI for Hyper-V:

  • Familiar Windows GUI
  • More tools available, both native and third party
  • Widest support from software manufacturers and consultants
  • Easiest hardware management
  • Valid environment for all Windows Server roles, features, and software
  • Can provide Automatic Activation for Windows Server in VMs (Datacenter Edition only)

Cons of Windows Server with GUI for Hyper-V:

  • Familiarity breeds contempt
  • Slowest feature roll-out cycle (see the bottom of this article)
  • Largest attack surface, especially with explorer.exe
  • Largest deployment size
  • Largest memory usage
  • Largest patch requirements
  • Must be licensed and activated
  • Upgrading to the next version requires paying for that version’s license, even if you will wait to deploy newer guests

Side-by-Side Comparison of Server Modes for Hyper-V

Two items appear in every discussion of this topic: disk space and memory usage. I thought that it might be enlightening to see the real numbers. So, I built three virtual machines running Hyper-V in nested mode. The first contains Hyper-V Server, the second contains Windows Server Datacenter Edition in Core mode, and the third contains Windows Server Datacenter Edition in GUI mode. I have enabled Hyper-V in each of the Windows Server systems and included all management tools and subfeatures (Add-WindowsFeature -Name Hyper-V -IncludeAllSubFeature -IncludeManagementTools). All came from the latest MSDN ISOs. None are patched. None are on the network.

Disk Usage Comparison of the Three Modes

I used the following PowerShell command to determine the used space:
'{0:N0}' -f (Get-WmiObject -Class Win32_LogicalDisk | ? DeviceId -eq 'C:' | % {$_.Size - $_.FreeSpace})

Deployment Mode Used Disk Space (bytes)
Hyper-V Server 2016 6,044,270,592
Windows Server 2016 Datacenter Edition in Core mode 7,355,858,944
Windows Server 2016 Datacenter Edition in GUI mode 10,766,614,528

For shock value, the full GUI mode of Windows Server adds 78% space utilization above Hyper-V Server 2016 and 46% space utilization above Core mode. That additional space amounts to less than 5 gigabytes. If 5 gigabytes will make or break your deployment, you’ve got other issues.

Memory Usage Comparison of the Three Modes

We’ll start with Task Manager while logged on:

Task Manager memory usage while logged on

These show what we expect: Hyper-V Server uses the least amount of memory, Windows Server Core uses a bit more, and Windows Server with GUI uses a few ticks above both. However, I need to point out that these charts show a more dramatic difference than you should encounter in reality. Since I’m using nested VMs to host my sample systems, I only gave them 2 GB total memory apiece. The consumed memory distance between Hyper-V Server and Windows Server with GUI weighs in at a whopping 0.3 gigabytes. If that number means a lot to you in your production systems, then you’re going to have other problems.

But that’s not the whole story.

Those numbers were taken from Task Manager while logged on to the systems. Good administrators log off of servers as soon as possible. What happens, then, when we log off? To test that, I had to connect each VM to the network and join the domain. I then ran Get-WmiObject Win32_OperatingSystem | select FreePhysicalMemory with the -ComputerName switch against each of the hosts. Check out the results:

Deployment Mode Free Memory (KB)
Hyper-V Server 2016 1,621,148
Windows Server 2016 Datacenter Edition in Core mode 1,643,060
Windows Server 2016 Datacenter Edition in GUI mode 1,558,744

Those differences aren’t so dramatic, are they? Windows Server Core even has a fair bit more free memory than Hyper-V Server… at that exact moment in time. If you don’t have much background in memory management, especially in terms of operating systems, then keep in mind that memory allocation and usage can seem very strange.

The takeaway: memory usage between all three modes is comparable when they are logged off.

Hyper-V and the “Surface Area” Argument

Look at the difference in consumed disk sizes between the three modes. Those extra bits represent additional available functionality. Within them, you’ll find things such as Active Directory Domain Services and IIS. So, when we talk about choosing between these modes, we commonly point out that all of these things add to the “attack surface”. We try to draw the conclusion that using a GUI-less system increases security.

First part: Let’s say that a chunk of malware injects itself into one of the ADDS DLLs sitting on your Windows Server host running Hyper-V. What happens if you never enable ADDS on that system? Well, it’s infected, to be sure. But, in order for any piece of malware to cause any harm, something eventually needs to bring it into memory and execute it. But, you know that you’re not supposed to run ADDS on a Hyper-V host. Philosophical question: if malware attacks a file and no one ever loads it, is the system still infected? Hopefully, you’ve got a decent antimalware system that will eventually catch and clean it, so you should be perfectly fine.

On one hand, I don’t want to downplay malware. I would never be comfortable with any level of infection on any system. On the other hand, I think common sense host management drastically mitigates any concerns. I don’t believe this is enough of a problem to carry a meaningful amount of weight in your decision.

Second part: Windows Server runs explorer.exe as its shell and includes Internet Explorer. Attackers love those targets. You can minimize your exposure by, you know, not browsing the Internet from a server, but you can’t realistically avoid using explorer.exe on a GUI system. However, as an infrastructure system, you should be able to safely instruct your antimalware system to keep a very close eye on Explorer’s behavior and practice solid defensive techniques to prevent malware from reaching the system.

Overall takeaway from this section: Explorer presents the greatest risk. Choose the defense-in-depth approach of using Hyper-V Server or Windows Server Core, or choose to depend on antimalware and safe operating techniques with the Windows Server GUI.

Hyper-V and the Patch Frequency Non-Issue

Another thing that we always try to bring into these discussions is the effect of monthly patch cycles. Windows Server has more going on than Hyper-V Server, so it gets more patches. From there, we often make the argument that more patches equals more reboots.

A little problem, though. Let’s say that Microsoft releases twelve patches for Windows Server and only two apply to Hyper-V Server. One of those two patches requires a reboot. In that case, both servers will reboot. One time. So, if we get hung up on downtime over patches, then we gain nothing. I believe that, in previous versions, the downtime math did favor Hyper-V Server a few times. However, patches are now delivered in only a few omnibus packages instead of smaller targeted patches. So, I suspect that we will no longer be able to even talk about reboot frequency.

One part of the patching argument remains: with less to patch, fewer things can go wrong from a bad patch. However, this argument faces the same problem as the “surface area” non-issue. What are you using on your Windows Server system that you wouldn’t also use on a Hyper-V Server system? If you’re using your Windows Server deployment correctly, then your patch risks should be roughly identical.

Most small businesses will patch their Hyper-V systems via automated processes that occur when no one is around. Larger businesses will cluster Hyper-V hosts and allow Cluster Aware Updating to prevent downtime.

Overall takeaway from this section: patching does not make a convincing argument in any direction.

Discussions: Choosing Between Core and GUI for Hyper-V

Now you’ve seen the facts. You’ve seen a few generic arguments for the impact level of two of those facts. If you still don’t know what to do, that’s OK. Let’s look at some situational points.

A Clear Case for Hyper-V on Windows Server Full GUI

If you’re in a small environment with only a single physical server, go ahead and use the full GUI.

Why? Some reasons:

  • It is not feasible to manage Hyper-V without any GUI at all. I advocate for PowerShell usage as strongly as anyone else, but sometimes the GUI is a better choice. In a multi-server environment, you can easily make a GUI-less system work because you have at least one GUI-based management system somewhere. Without that, GUI-less demands too much.
  • The world has a shortage of Windows Server administrators that are willing and able to manage a GUI-less system. You will have difficulty hiring competent help at a palatable price.
  • Such a small shop will not face the density problems that justify the few extra resources saved by the GUI-less modes.
  • The other issues that I mentioned are typically easier to manage in a small environment than in a large environment.
  • A GUI system will lag behind Core in features, but Hyper-V is quite feature-complete for smaller businesses. You probably won’t miss anything that really matters to you.
  • If you try Hyper-V Server or Windows Server Core and decide that you made a mistake, you have no choice but to reinstall. If you install the GUI and don’t want to use it, then don’t use it — switch to remote management practices. You won’t miss out on anything besides the faster feature release cycle.

We can make some very good arguments for a GUI-less system, but none are strong enough to cause crippling pain for a small business. When the GUI fits, use it.

A Clear Case for Hyper-V Server

Let’s switch gears completely. Let’s say that:

  • You’re a PowerShell whiz
  • You’re a cmd whiz
  • You run a lot of Linux servers
  • Your Windows Servers (if any) are all temporary testing systems

Hyper-V Server will suit you quite well.

Everyone Else

If you’re somewhere in the middle of the above two cases, I think that Microsoft’s recommendation of Windows Server Core with Hyper-V fits perfectly. The parts that stand out to me:

  • Flexibility: Deduplication has done such wonders for me in VDI that I’m anxious to see how I can apply it to server loads. In 2012 R2, server guests were specifically excluded; VDI only. Server 2016 maintains the same wording in the feature setup, but I can’t find a comparable statement saying that server usage is verboten in 2016. I could also see a case for building a nice VM management system in ASP.Net and hosting it locally with IIS — you can’t do that in Hyper-V Server.
  • Automatic Virtual Machine Activation. Who loves activation? Nobody loves activation! Let the system deal with that.
  • Security by terror: Not all server admins are created equally. I find that the really incompetent ones won’t even log on to a Server Core/Hyper-V Server system. That means that they won’t put them at risk.
  • Remote management should be the default behavior. If you don’t currently practice remote management, there’s no time like the present! You can dramatically reduce the security risk to any system by never logging on to its console, even by RDP.
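
As a small taste of what that looks like from a management workstation (the host name is hypothetical; the Hyper-V PowerShell module or RSAT must be installed locally):

# Query a remote host's virtual machines without ever logging on to it
Get-VM -ComputerName svhv01

# Open a remote session only when you genuinely need one
Enter-PSSession -ComputerName svhv01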

You can manage Hyper-V systems from a Windows 10 desktop with RSAT. It’s not entirely without pain, though:

  • Drivers! Ouch! Microsoft could help us out by providing a good way to use Device Manager remotely. We should not let driver manufacturers off the hook easily, though. Note: Honolulu is coming to reduce some of that pain.
  • Not everyone that requires the GUI is an idiot. Some of them just haven’t yet learned. Some have learned their way around PowerShell but don’t know how to use it for Hyper-V. You like taking vacations sometimes, don’t you?
  • Crisis mode when you don’t know what’s wrong can be a challenge. It’s one thing to keep the top spinning; it’s another to get it going when you can’t see what’s holding it down. However, these problems have solutions. It’s a pain, but a manageable one.

I’m not here to make the decision for you. You now have enough information to make an informed decision.

Have any questions or feedback?

Leave a comment below!


Project ‘Honolulu’: What you need to know

19 Sep 2017 by Eric Siron
   
Windows Server

The biggest problem with Hyper-V isn’t Hyper-V at all. It’s the management experience. We’ve all had our complaints about that, so I don’t think a rehash is necessary. Thing is, Hyper-V is far from alone. Microsoft has plenty of management issues across its other infrastructure roles and features as well. Enter Project ‘Honolulu’: an attempt to unify and improve the management experience for Microsoft’s infrastructure offerings.

Before I get very far into this, I want one thing to be made abundantly clear: the Honolulu Project is barely out of its infancy. As I write this, it is exiting private preview. The public beta bits aren’t even published yet.

With that said, unless many things change dramatically between now and release, this is not the Hyper-V management solution that you have been waiting for. At its best, it has a couple of nice touches. In a few cases, it is roughly equivalent to what we have now. For most things, it is worse than what we have available today. I hate to be so blunt about it because I believe that Microsoft has put a great deal of effort into Honolulu. However, I also feel like they haven’t been paying much attention to the complaints and suggestions the community has made regarding the awful state of Hyper-V management tools.

What is Project ‘Honolulu’

When you look at Honolulu, it will appear something like an Azure-ified Server Manager. It adopts the right-to-left layouts that the Azure tools use, as opposed to the up-and-down scrolling that we humans and our mice are accustomed to.

Thou shalt not use Honolulu in a window

This sort of thing is normative for the Azure tools. If you have a 50″ 4k screen and nothing else to look at, I’m sure that it looks wonderful. If you are using VMConnect or one of those lower resolution slide-out monitors that are still common in datacenters, then you might not enjoy the experience. And yes, the “<” icon next to Tools means that you can collapse that panel entirely. It doesn’t help much. I don’t know when it became passé for columns to be resizable and removable. Columns should be resizable and removable.

As you see it in that screenshot, Honolulu is running locally. It can also run in a gateway mode on a server. You can then access it from a web browser from other systems and devices.

Requirements for Running Project ‘Honolulu’

For the Honolulu Project itself, you can install on:

  • Windows 10
  • Windows Server 2012 through 2016

On a Windows 10 desktop or a Server 2012 system, it will only be accessible locally.

If you install on a Server 2012 R2 through 2016 SKU, it will operate in the aforementioned gateway mode. You just open a web browser to that system on whatever port you configure, ex: https://managementsystem:6516. You will be prompted for credentials.

When you provide credentials to Honolulu, the systems that you connect to will be associated with your account. If you connect to Honolulu with a different user account, it will not display any of the servers that were chosen under the other account. Each needs to be set up separately. You can import lists to reduce the pain.

Note: As it stands right now, I cannot get Honolulu to work on a 2012 R2 system. It will open, but then refuses to connect to any server in my organization. I am actively working on this problem and will report back if a solution can be found. That’s one of the dangers of using early software, not a lifelong condemnation of the product.

Requirements for Targets of Honolulu

The target system(s) must be a Server SKU, 2012 through 2016. It/they must have Windows Management Framework 5 or higher loaded. The easiest way to tell is by opening a PowerShell prompt and running $PSVersionTable. The PowerShell version and the Windows Management Framework version will always be the same. It also helps if you can verify that you can connect from the management system to the target with Enter-PSSession.
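
A quick way to check both ends (the target name is an example; WinRM must already be working between the two systems):

# Local PowerShell/WMF version
$PSVersionTable.PSVersion

# The same check, run on the prospective target
Invoke-Command -ComputerName svhv01 -ScriptBlock { $PSVersionTable.PSVersion }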

The following screenshot shows an example. I first tested that my management system has the correct version. Then I connected to my target and checked the WMF version there. I should have no problems setting up the first system to run Project Honolulu to connect to the second system.

Checking PowerShell versions on the management system and the target

If you are running all of the systems in the same domain, then this will all “just work”. I’m not sure yet how cross-domain authentication works. If you’ve decided that security is unimportant and you’re running your Hyper-V host(s) in workgroup mode, then you will need to swing the door wide open to attackers by configuring TrustedHosts on the target system(s) to trust any computer that claims to have the name of your Honolulu system.

Requirements for Viewing Project ‘Honolulu’

Honolulu presents its views via HTML 5 web pages. Edge and Chrome work well. Internet Explorer doesn’t work at all:

Honolulu does not support Internet Explorer

I think it will be interesting to see how that plays out in the enterprise. Windows 10 isn’t exactly the best corporate player, so several organizations are hanging on to Windows 7. Others are moving to Windows 10, but opting for the Long-Term Servicing Branch (LTSB). LTSB doesn’t include Edge. So, is Microsoft (inadvertently?) pushing people toward Google Chrome?

Connecting to a Target Server in Honolulu

When you first start up Honolulu, you have little to look at:

The empty Honolulu console

Click the + Add link to get started adding systems. Warning: If you’re going to add clusters, do that following the instructions in the next section. Only follow this for stand-alone hosts.

Type the name of a system to connect to, and it will automatically start searching. Hopefully, it will find the target. You can click the Submit button whether it can find it or not.

A working system:


A non-working system:


As you can see in the links, you can also Import Servers. For this, you need to supply a text file that contains a list of target servers.

Connecting to a Target Cluster in Honolulu

Honolulu starts out in “Server Manager” mode, so it will only connect to servers. If you try to connect it to a failover cluster in Server Manager mode, it will pick up the owning node instead. In order to connect to a failover cluster, you need to switch the mode.

At the top of the window, find the Server Manager heading. Drop that down and select Failover Cluster Manager.


Now, add clusters with the + Add button. When it detects the cluster, it will also prompt you to add the nodes as members of Server Manager:

Adding a cluster and its nodes

Windows Management Framework Error for Honolulu

As mentioned in the beginning, every target system needs to have at least Windows Management Framework version 5 installed. If a target system does not meet that requirement, Honolulu will display that status:

The Windows Management Framework status warning

The Really Quick Tour for Honolulu

I focus on Hyper-V and I’m certain that dozens of other Honolulu articles are already published (if not more). So, let’s burn through the non-Hyper-V stuff really fast.

Right-click doesn’t do anything useful anywhere in Honolulu. Train yourself to use only the left mouse button.

Server Manager has these sections:

  • Overview: Shows many of the things that you can see in Computer Properties. Also has several real-time performance charts, such as CPU and memory. For 2016+ you can see disk statistics. I like this page in theory, but the execution is awful. It assumes that you always want to see the basic facts about a host no matter what and that you have a gigantic screen resolution. My VMConnect screen is set to 1366×768 and I can’t even see a single performance chart in its entirety.
  • Certificates: No more dealing with all the drama of manually adding the certificates snap-in! Also, you can view the computer and user certificates at the same time! Unfortunately, it doesn’t look like you can request a new certificate, but most other functionality seems to be here.
  • Devices: You can now finally see the devices installed on a Server Core/Hyper-V Server installation. You can’t take any action except Disable, unfortunately. It’s still better than what we had.
  • Events: Event Viewer, basically.
  • Files: Mini-File Explorer in your browser! You can browse the directory structure and upload/download files. You can view properties, but you can’t do anything with shares or permissions.
  • Firewall: Covers the most vital parts of firewall settings (profile en/disabling and rule definitions).
  • Local Users and Groups: Add and remove local user accounts. Add them to or remove them from groups. You cannot add or delete local groups. Adding a user to a group is completely free-text; no browsing. Also, if you attempt to add a user that doesn’t exist, you get a confirmation message that tells you that it worked, but the field doesn’t populate.
  • Network: View the network connections and set basic options for IPv4 and IPv6.
  • Processes: Mostly like Task Manager. Has an option to Create Process Dump.
  • Registry: Nifty registry editor; includes Export and Import functions. Very slow, though; personally I’d probably give up and use regedit.exe for as long as I’m given a choice.
  • Roles and Features: Mostly what you expect. No option for alternate install sources, though, so you won’t be using it to install .Net 3.5. Also, I can’t tell how to discard accidental changes. No big deal if you only accidentally checked a single item. For some reason, clicking anywhere on a line toggles the checked/not checked state, so you can easily change something without realizing that you did it.
  • Services: Interface for installed services. Does not grant access to any advanced settings for a service (like the extra tabs on the SNMP Service). Also does not recognize the Delayed Start modifier for Automatic services. I would take care to only use this for Start and Stop functions.
  • Storage: Works like the Storage part of the Files and Storage Services section in Server Manager. Like the preceding sections, includes most of the same features as its real Server Manager counterpart, but not all.
  • Storage Replica: I’m not using Storage Replica anywhere so I couldn’t gauge this one. Requires a special setup.
  • Virtual Machines and Virtual Switches: These two sections will get more explanation later.
  • Windows Update: Another self-explanatory section. This one has most of the same functionality as its desktop counterpart, although it has major usability issues on smaller screens. The update list is forced to yield space to the restart scheduler, which consumes far more screen real estate than it needs to do its job.

Virtual Switches in Honolulu

Alphabetically, this comes after Virtual Machines, but I want to get it out of the way first.

The Virtual Switches section in Project ‘Honolulu’ mostly mimics the virtual switch interface in Hyper-V Manager. So, it gets props for being familiar. It takes major dings for duplicating Hyper-V Manager’s bad habits.

First, the view:

The virtual switch overview

Functionality:

  • New Virtual Switch
  • Delete Virtual Switch
  • Rename Virtual Switch
  • Modify some settings of a virtual switch

The Settings page (which I had to stitch together because it successfully achieves the overall goal of wasting maximal space):

Settings for vSwitch

The New Virtual Switch screen looks almost identical, except that it’s in a sidebar so it’s not quite as wide.

Notes on Honolulu’s virtual switch page:

  • Copies Hyper-V Manager’s usage of the adapter’s cryptic Description field instead of its name field.
  • If you look in the Network Adapter setting on the Settings for vSwitch screenshot and then compare it to the overview screen shot, you should notice something: It didn’t pick the team adapter that I really have my vSwitch on. Also, you can’t choose the team adapter. I didn’t tinker with that because I didn’t want to break my otherwise functional system, but not being able to connect a virtual switch to a team is a non-starter for me.
  • Continues to use the incorrect and misleading “Share” terminology for “Shared with Management OS” and “Allow management OS to share this network adapter”. Hey Microsoft, how hard would it really be to modify those to say “Used by Management OS” and “Allow management OS to use this virtual switch”?
  • No VLAN settings.
  • No SR-IOV settings.
  • No Switch-Embedded Teaming settings
  • No options for controlling management OS virtual NICs beyond the first one

Virtual Machines in Honolulu

All right, this is why we’re here! Make sure that you’re over something soft or the let-down might sting.

Virtual Machine Overview

The overview is my favorite part, although it also manifests the wasteful space usage that plagues this entire tool. Even on a larger resolution, it’s poorly made. However, I like the information that it displays, even if you need to scroll a lot to see it all.

At the top, you get a quick VM count and a recap of recent events:

The Virtual Machines overview

Even though I like the events being present, that tiny list will be mostly useless on an environment of any size. Also, it might cause undue alarm. For instance, those errors that you see mean that Dynamic Memory couldn’t expand any more because the VMs had reached their configured maximum. You can’t see that here because it needs two inches of whitespace padding to its left and right.

You can also see the Inventory link. We’ll come back to that after the host resources section.

Virtual Machine Host Resource Usage

I mostly like the resource view. Even on my 1366×768 VMConnect window, I have enough room to fit the CPU and memory charts side-by-side. But, they’re stacked and impossible to see together. I’ve stitched the display for you to see what it could look like with a lot of screen to throw at it:

Host resource charts, stitched together

Virtual Machine Inventory

Back at the top of the Virtual Machines page, you can find the Inventory link. That switches to a page where you can see all of the virtual machines:

The virtual machine inventory

That doesn’t look so bad, right? My primary complaint with the layout is that I believe that the VM’s name should be prioritized. I’d rather have an idea of the VM’s name as opposed to the Heart Beat or Protected statuses, if given a choice.

My next complaint is that, even at 1366×768, which is absolutely a widescreen resolution, the elements have some overrun. If I pick a VM that’s on, I must be very careful when trying to access the More menu so that I don’t inadvertently Shutdown the guest instead:

Crowded inventory controls at 1366×768

What’s on that More menu? Here you go:

The More menu for a running virtual machine

That’s for a virtual machine that’s turned on. No, your eyes are not deceiving you. You cannot modify any of the settings of a virtual machine while it is running. Power states and checkpoints are the limit.

I don’t know what Protected means. It’s not about being shielded or clustered. I suppose it means that it’s being backed up to Azure? If you’re not using Azure backup then this field just wastes even more space.

Virtual Machine Settings

If you select a virtual machine that’s off, you can then modify its settings. I elected not to take all of those screenshots. Fitting with the general Honolulu motif, they waste a great deal of space and present less information than Hyper-V Manager. These setting groupings are available:

  • General: The VM’s name, notes, automatic start action, automatic stop action, and automatic critical state action
  • Memory: Startup amount, Dynamic Memory settings, buffer, and weight
  • Processors: Number only. No NUMA, compatibility mode, reservation, or weight settings
  • Disks: I could not get the disks tab to load for any virtual machine on any host, whether 2012 R2 or 2016. It just shows the loading animation
  • Networks: Virtual switch connection, VLAN, MAC (including spoofing), and QoS. Nothing about VMQ, IOV, IPSec, DHCP Guard, Router Guard, Protected Network, Mirroring, Guest Teaming, or Consistent Device Naming
  • Boot Order: I could not get this to load for any virtual machine.

Other Missing Hyper-V Functionality in Honolulu

A criticism that we often level at Hyper-V Manager is just how many settings it excludes. If we only start from there, Project ‘Honolulu’ excludes even more.

Features available in Hyper-V Manager that Honolulu does not expose:

  • Hyper-V host settings — any of them. Live Migration adapters, Enhanced Session Mode, RemoteFX GPUs, and default file locations
  • No virtual SAN manager. Personally, I can live with that, since people need to stop using pass-through disks anyway. But, there are some other uses for this feature and it still works, so it makes the list of Honolulu’s missing features.
  • Secure boot
  • VM Shielding
  • Virtual TPM
  • Virtual hardware add/remove
  • Indication of VM Generation
  • Indication/upgrade of VM version
  • Shared Nothing Live Migration (intra-cluster Live Migration does work; see the Failover Clustering section below)
  • Storage (Live) Migration
  • Hyper-V Replica
  • Smart page file

Except for the automatic critical action setting, I did not find anything in Project ‘Honolulu’ that isn’t in Hyper-V Manager. So, don’t look here for nested VM settings or anything like that.

Failover Clustering for Hyper-V in Honolulu

Honolulu’s Failover Cluster Manager is even more of a letdown than Hyper-V. Most of the familiar tabs are there, but it’s almost exclusively read-only. However, we Hyper-V administrators get the best of what it can offer.

If you look on the Roles tab, you can find the Move action. That initiates a Quick or Live Migration:

The Move action

Unfortunately, it forces you to pick a destination host. In a small cluster like mine, no big deal. In a big cluster, you’d probably like the benefit of the automatic selector. You can’t even see what the other nodes’ load levels look like to help you to decide.

Other nice features missing from Honolulu’s Failover Cluster Manager:

  • Assignment, naming, and prioritizing of networks
  • Node manipulation (add/evict)
  • Disk manipulation (add/remove cluster disk, promote/demote Cluster Shared Volume, CSV ownership change)
  • Quorum configuration
  • Core resource failover
  • Cluster validation. The report is already in HTML, so even if this tool can’t run validation, it would be really nice if it could display the results of one

Showstopping Hyper-V Issues in Project ‘Honolulu’

Pay attention to the dating of this article, as all things can change. As of this writing, these items prevent me from recommending Honolulu:

  • No settings changes for running virtual machines. The Hyper-V team has worked very hard to allow us to change more and more things while the virtual machine is running. Honolulu negates all of that work, and more.
  • No Hyper-V switch on a team member
  • No VMConnect (console access). If you try to connect to a VM, it uses RDP. I use a fair number of Linux guests. Microsoft has worked hard to make it easy for me to use Linux guests. For Windows guests, an RDP session cuts out the pre-boot portions that we sometimes need to see.
  • No host configuration

Any or all of these things might change between now and release. I’ll be keeping up with this project in hopes of being able to change my recommendation.

The Future of Honolulu

I need to stress, again, that Honolulu is just a baby. Yes, it needs a lot of work. My general take on it, though, is that it’s beginning life by following in the footsteps of the traditional Server Manager. The good: it tries to consolidate features into a single pane of glass. The bad: it doesn’t include enough. Sure, you can use Server Manager/Honolulu to touch all of your roles and features. You can’t use it as the sole interface to manage any of them, though. As-is, it’s a decent overview tool, but not much more.

Where Honolulu goes from here is in all of our hands. I’m writing this article a bit before the project goes into public beta, so you’re probably reading it at some point afterward. Get the bits, set it up, and submit your feedback. Be critical, but be nice. Designing a functional GUI is hard. Designing a great GUI is excruciatingly difficult. Don’t make it worse with cruel criticism.

Have any questions or feedback?

Leave a comment below!


Why Your Hyper-V PowerShell Commands Don’t Work (and how to fix them)

14 Sep 2017 by Eric Siron
   
Hyper-V & PowerShell

I occasionally receive questions about Hyper-V-related PowerShell cmdlets not working as expected. Sometimes these problems arise with the module that Microsoft provides; other times, they manifest with third-party tools. Even my own tools show these symptoms. Most GUI tools are developed to avoid the problems that plague the command line, but the solutions aren’t always perfect.

The WMI Foundation

All tools, graphical or command-line, eventually work their way back to the only external interface that Hyper-V provides: its WMI/CIM provider. CIM stands for “Common Information Model”. The Distributed Management Task Force (DMTF) maintains the CIM standard. CIM defines a number of interfaces pertaining to management. Anyone can write CIM-conforming modules to work with their systems. These modules allow users, applications, and services to retrieve information and/or send commands to the managed system. By leveraging CIM, software and hardware manufacturers can provide APIs and controls with predictable, standardized behavior.

Traditionally, Microsoft has implemented CIM via Windows Management Instrumentation (WMI). Many WMI instructions involved VBS or WMIC. As PowerShell gained popularity, WMI also gained popularity due to the relative ease of Get-WmiObject. Depending on where you look in Microsoft’s vast documentation, you might see pushes away from the Microsoft-specific WMI implementation toward the more standard CIM corollaries. Get-CimInstance provides something of an analog to Get-WmiObject, but they are not interchangeable.
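
For example, the Hyper-V query that appears later in this article can be written either way; the returned object types differ, which is one reason scripts written against one don't always port cleanly to the other:

# WMI-style query against the Hyper-V provider
Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem

# The CIM-style near-equivalent
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_ComputerSystem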

For any of this to ever make any sense, you need to understand one thing: anyone can write a WMI provider. The object definitions and syntax of a provider all descend from the common standard, but the provider’s developer determines how it all functions behind the scenes.

Why Hyper-V PowerShell Cmdlets May Not Work

Beyond minor things like incorrect syntax and environmental things like failed hardware, two common reasons prevent these tools from functioning as expected.

The Hyper-V Security Model

I told you all that about WMI so that this part would be easier to follow. The developers behind the Hyper-V WMI provider decide how it will react to any given WMI/CIM command that it receives. Sometimes, it chooses to have no reaction at all.

Before I go too far, I want to make it clear that no documentation exists for the security model in Hyper-V’s WMI provider. I ran into some issues some time ago with WMI commands not working the way that I expected. I opened a case with Microsoft, and it wound up going all the way to the developers. The answer that came back pointed to the internal coding of the module. In other words, I was experiencing a side effect of designed behavior. So, I asked if they would give me the documentation on that — basically, anything on what caused that behavior. I was told that it doesn’t exist. They obviously don’t have any externally-facing documentation, but they don’t have anything internal, either. So, everything that you’re going to see in this article originates from experienced (and repeatable) behavior. No insider secrets or pilfered knowledge were used in the creation of this material.

Seeing Effects of the Hyper-V Security Model in Action

Think about any “Get” PowerShell cmdlet. What happens when you run a “Get” against objects that don’t exist? For example, what happens when I run Get-Job when no jobs are present?

psnowork_emptygetjob

Nothing! That’s what happens. You get nothing.

So, if I run Get-VM and get nothing (2012/R2):

psnowork_emptygetvm

That means that the host has no virtual machines, right?

But wait:

Hyper-V Powershell commands help

What happened? A surprise Live Migration?

Look at the title bars carefully. The session on the left was started normally. The session on the right was started by using Run as administrator.

The PowerShell behavior has changed in 2016:

psnowork_emptygetvm2016

The PowerShell cmdlets that I tried now show an appropriate error message. However, only the PowerShell module has been changed. The WMI provider behaves as it always has:

psnowork_wmigetvm2016

To clarify that messy output, I ran
gwmi -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem -Filter 'Caption="Virtual Machine"' as a non-privileged user and the system gave no output. That window overlaps another window that contains the output from Get-VM in an elevated session.

Understanding the Effects of the Hyper-V Security Model

When we don’t have permissions to do something, we expect that the system will alert us. If we try to open a file, we get a helpful error message explaining why the system can’t allow it. We’ve all experienced that enough times that we’ve been trained to expect a red flag. The Hyper-V WMI provider does not exhibit that expected behavior. I have never attempted to program a WMI provider myself, so I don’t want to pass any judgment. I noticed that the MSCluster namespace acts the same way, so it may be something inherent to CIM/WMI that the provider authors have no control over.

In order for a WMI query to work against Hyper-V’s provider, you must be running with administrative privileges. Confusingly, “being a member of the Administrators group” and “running with administrative privileges” are not always the same thing. When working with the Hyper-V provider on the local system, you must always ensure that you run with elevated privileges (Run as administrator) — even if you log on with an administrative account. Remote processes don’t have that problem.
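
If you want to confirm elevation from inside a session before blaming the provider, a minimal check looks something like this (plain .NET principal work, nothing Hyper-V-specific):

# Returns True when the current PowerShell process runs elevated
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)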

The administrative requirement presents another stumbling block: you cannot create a permanent WMI event watcher for anything in the Hyper-V provider. Permanent WMI registration operates anonymously; the Hyper-V provider requires confirmed administrative privileges. As with everything else, no errors are thrown. Permanent WMI watchers simply do not function.

The takeaway: when you unexpectedly get no output from a Hyper-V-related PowerShell command, you most likely do not have sufficient permissions. Because the behavior bubbles up from the bottom-most layer (CIM/WMI), the problem can manifest in any tool.

The Struggle for Scripters and Application Developers

People sometimes report that my tools don’t work. For example, I’ve been told that my KVP processing stack doesn’t do anything. Of course, the tool works perfectly well — as long as it has the necessary privileges. So, why didn’t I write that, and all of my other scripts, to check their privilege? Because it’s really hard, that’s why.

With a bit of searching, you’ll discover that I could just insert
#requires -RunAsAdministrator at the top of all my scripts. Problem solved, right? Well, no. Sure, it would “fix” the problem when you run the script locally. But, sometimes you’ll run the script remotely. What happens if:

  • … you run the script with an account that has administrative privileges on the target host but not on the local system?
  • … you run the script with an account that has local administrative privileges but only user privileges on the target host?

The answer to both: the desired outcome will not match your expectations.

I would need to write a solution that:

  • Checks to see if you’re running locally (harder than you might think!)
  • Checks that you’re a member of the local administrators
  • If you’re running locally, checks if your process token has administrative privileges

That’s not too tough, right? No, it’s not awful. Unfortunately, that’s not the end of it. What if you’re running locally, but invoke PowerShell Remoting with -ComputerName or Enter-PSSession or Invoke-Command? Then the entire dynamic changes yet again, because you’re not exactly remote but you’re not exactly local, either.

I’ve only attempted to fully solve this problem one time. My advanced VM settings editor includes layers of checks to try to detect all of these conditions. I spent quite a bit of time devising what I hoped would be a foolproof way to ensure that my application would warn you of insufficient privileges. I still get messages telling me that it doesn’t show any virtual machines.

I get better mileage by asking you to run my tools properly.

How to Handle the Hyper-V WMI Provider’s Security

Simply put, always ensure that you are running with the necessary privileges. If you are working locally, open PowerShell with elevated permissions:

psnowork_runas

If running remotely, always ensure that the account that you use has the necessary permissions. If your current local administrator account does not have the necessary permissions on the target system, invoke PowerShell (or whatever tool you’re using) by [Shift]+right-clicking the icon and selecting Run as different user:

psnowork_runasdifferentuser

What About the “Hyper-V Administrators” Group?

Honestly, I do not deal with this group often. I don’t understand why anyone would be a Hyper-V Administrator but not a host administrator. I believe that a Hyper-V host should not perform any other function. Trying to distinguish between the two administrative levels gives off a strong “bad plan” odor.

That said, I’ve seen more than a few reports that membership in Hyper-V Administrators does not work as expected. I have not tested it extensively, but my experiences corroborate those reports.

The Provider Might Not Be Present

All this talk about WMI mostly covers instances when you have little or no output. What happens when you have permissions, yet the system throws completely unexpected errors? Well, many things could cause that. I can’t make this article into a comprehensive troubleshooting guide, unfortunately. However, you can be certain of one thing: you cannot tell Hyper-V to carry out an action if Hyper-V is not running!

Let’s start with an obvious example. I ran Get-VM on a Windows 10 system without Hyper-V:

psnowork_getvmnohv

Nice, clear error, right? 2012 R2/Win 8.1 have a slightly different message.

Things change a bit when using the VHD cmdlets. I don’t have any current screenshots to show you because the behavior changed somewhere along the way… perhaps with Update 1 for Windows Server 2012 R2. Windows Vista/Server 2008 and later include a native driver for mounting and reading/writing VHD files. Windows 8/Server 2012 and later include a native driver for mounting and reading/writing VHDX files. However, only Hyper-V can process any of the VHD cmdlets. Get-VHD, New-VHD, Optimize-VHD, Resize-VHD, and Set-VHD require a functioning installation of Hyper-V. Just installing the Hyper-V PowerShell module won’t do it.

Currently, all of these cmdlets will show the same or a similar message to the one above. However, older versions of the cmdlets give a very cryptic message that you can’t do much with.

How to Handle a Missing Provider

This seems straightforward enough: only run the cmdlets from the Hyper-V module against a system with a functioning installation of Hyper-V. You can determine which functions the module owns with:
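Get-Command -Module Hyper-V

That lists everything the module exports, the VHD cmdlets included.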

When running them from a system that doesn’t have Hyper-V installed, use the ComputerName parameter.
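
For example, something like this should work from a management workstation that has only the Hyper-V PowerShell module installed; the host name and path are hypothetical:

# The path is interpreted on the remote Hyper-V host, not on your workstation
Get-VHD -Path 'C:\VMs\demo.vhdx' -ComputerName HV01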

Further Troubleshooting

With this article, I wanted to knock out two very simple reasons that Hyper-V PowerShell cmdlets (and some other tools) might not work. Of course, I realize that any given cmdlet might error for a wide variety of reasons. I am currently only addressing issues that block all Hyper-V cmdlets from running.

For troubleshooting a failure of a specific cmdlet, make sure to pay careful attention to the error message. They’re not always perfect, but they do usually point you toward a solution. Sometimes they display explicit text messages. Sometimes they include the hexadecimal error code. If they’re not clear enough to understand immediately, you can use these things in Internet searches to guide you toward an answer. You must read the error, though. Far too many times, I see “administrators” go to a forum and explain what they tried to do, but then end with, “I got an error” or “it didn’t work”. If the error message had no value, the authors wouldn’t have bothered to write it. Use it.

Have any questions or feedback?

Leave a comment below!


Upgrading Hyper-V 2012 R2 to Hyper-V 2016

07 Sep 2017 by Eric Siron
   
0    
Hyper-V Articles

Ready to make the jump from Hyper-V 2012 R2 to 2016? With each successive iteration of Hyper-V, the move gets easier. You have multiple ways to make the move. If you’re on the fence about upgrading, some of the techniques involve a bit less permanence.

What This Article Will Not Cover

I’m not going to show you how to install Hyper-V. The process has not changed since 2012. We probably owe the community a brief article on installing though…

I will not teach you how to use Hyper-V or its features. You need to know:

  • How to install Hyper-V
  • How to install and access Hyper-V’s native tools: Hyper-V Manager, PowerShell, and, where applicable, Failover Cluster Manager
  • How to use Hyper-V Replica, if you will be taking any of the HVR options
  • How to use Live Migration

I won’t make any special distinctions between Hyper-V Server and Windows Server with Hyper-V.

I will not show anything about workgroup configurations. Stop making excuses and join the domain.

I’m not going to talk about Windows 10, except in passing. I’m not going to talk about versions prior to 2012 R2. I don’t know if you can skip over 2012 R2.

What This Article Will Cover

What we will talk about:

  • Virtual Machine Configuration File Versions
  • Rolling cluster upgrades: I won’t spend much time on that because we already have an article
  • Cross-version Live Migration
  • Hyper-V Replica
  • Export/import
  • In-place host upgrades

Virtual Machine Configuration File Versions

Each new iteration of Hyper-V brings a new format for the virtual machine definition file. It also brings challenges when you’re running different versions of Hyper-V. Historically, Hyper-V really only wants to run virtual machines that use its preferred definition version. If it took in an older VM, it would want to upconvert it. 2016 changes that pattern a little bit. It will happily run version 5.0 VMs (2012 R2) without any conversion at all. That means that you can freely move a version 5.0 virtual machine between a system running 2012 R2 Hyper-V and a system running 2016. The Windows 10/Windows Server 2016 version of Hyper-V Manager includes a column so that you can see the version:

m16_cv

The version has been included in the Msvm_VirtualSystemSettingData WMI class for some time and exposed as a property in Get-VM. However, the Get-VM cmdlet in version 2 of the Hyper-V module (ships with W10/WS2016/HV2016) now includes the version in the default view:

upgrading hyper-v 2012 r2 to 2016 - version 5.0
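
If you'd rather query the value than eyeball the console, the property sits right on the VM object; a trivial example:

Get-VM | Select-Object Name, Version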

The capability of 2016 to directly operate the older version enables all of the features that we’ll talk about in this article.

Rolling Cluster Upgrades

2016 gives an all-new upgrade option. “Rolling cluster upgrade” allows you to upgrade individual cluster nodes to 2016. At least, we describe it that way. More accurately, clusters of Hyper-V hosts can contain both 2012 R2 and 2016 simultaneously. So, “upgrading” may not be the correct term to use for individual nodes. You can upgrade them, of course, but you can also wipe them out and start over or replace them with all-new hardware. Whatever you’re doing, the process boils down to: take down a 2012 R2 node, insert a 2016 node.

A feature called “cluster functional level” enables this mixing of versions. When the first 2016 node joins the cluster, it becomes a “mixed mode” cluster running at a “functional level” of 2012 R2. Once the final 2012 R2 node has been removed, you just run Update-ClusterFunctionalLevel. Then, at your convenience, you can upgrade the configuration version of the virtual machines.
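
As a rough sketch of that last stretch in PowerShell, run from any remaining cluster node once the final 2012 R2 node is gone:

# Confirm that only 2016 nodes remain, then raise the cluster out of mixed mode
Get-ClusterNode
Update-ClusterFunctionalLevel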

Adrian Costea wrote a fuller article on rolling cluster upgrades.

Cross-Version Live Migration

Due to the versioning feature that we opened the article with, Live Migration can freely move a version 5.0 virtual machine between a 2012 R2 system and a 2016 system. If both of the hosts belong to the same cluster (see the previous section), then you don’t need to do anything else. Contrary to some myths being passed around, you do not need to configure anything special for intra-cluster Live Migrations to work.

To Live Migrate between hosts that do not belong to the same cluster, you need to configure constrained delegation. That has not changed from 2012 R2. However, one thing has changed: you don’t want to restrict delegation to Kerberos on 2016 systems anymore. Instead, open it up to any protocol. I provided a PowerShell script to do the work for you. If you’d rather slog through the GUI, that same article shows a screenshot of where you’d do it.

Special note on constrained delegation configuration between 2012 R2 and 2016: Constrained Delegation’s behavior can be… odd. It gets stranger when combining 2012 R2 with 2016. On a 2016 system’s property sheet, always select “Use any authentication protocol”. On a 2012 R2 system, always select “Use Kerberos only”. I found that I was able to migrate from 2016 to 2012 R2 without setting any delegation at all, which I find… odd. When moving from 2012 R2, I found that I always had to start the migration from the 2016 side. Nothing I did ever allowed for a successful move when I initiated it from the 2012 R2 side. I expect that your mileage will vary. If you get errors, just try a different combination. I promise you, this migration path does work.
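
If you script the move, the cmdlet itself doesn't change between versions. A hedged example with hypothetical host and VM names, initiated from the 2016 side as recommended above (no promises that it dodges the delegation quirks any better than the GUI does):

Move-VM -ComputerName hv12r2-01 -Name svtest -DestinationHost hv16-01 -IncludeStorage -DestinationStoragePath 'D:\VMs\svtest'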

Cross-Version Hyper-V Replica

If you’re reading straight through, you’ll find that this section repeats much of what you’ve already seen.

Hyper-V Replica will happily move virtual machines using configuration version 5.0 between 2012 R2 and 2016 systems. The fundamental configuration steps do not change between the two versions.

Export and Import

The export feature has changed a great deal since its inception. Once upon a time, it would create an .exp file in place of the XML file. Without that .exp file, Hyper-V would not be able to import an exported virtual machine. That limitation disappeared with 2012. Since then, Hyper-V can import a virtual machine directly from its XML file. You don’t even need to export it anymore. If you wanted, you could just copy the folder structure over to a new host.

However, the export feature remains. It does two things that a regular file copy cannot:

  • Consolidation of virtual machine components. If you’ve ever looked at the settings for a virtual machine, you’d know that you can scatter its components just about anywhere. The export feature places all of a virtual machine’s files and attached VHD/Xs into a unified folder structure.
  • Active state preservation. You can export a running virtual machine. When imported, it will resume right where it left off.

When you export a virtual machine, it retains its configuration version. The import process on 2016 does not upgrade version 5.0 virtual machines. They will remain at version 5.0 until you deliberately upgrade them. Therefore, just as with Live Migration and Replica, you can use export/import to move version 5.0 virtual machines between 2012 R2 and 2016.
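
As a sketch of the round trip with hypothetical names and paths (export on the 2012 R2 host, then import a copy on the 2016 host):

# On the 2012 R2 host
Export-VM -Name svtest -Path \\fileserver\transfer

# On the 2016 host: locate the exported configuration file and import a copy of the VM
$config = Get-ChildItem '\\fileserver\transfer\svtest\Virtual Machines' -Filter *.xml | Select-Object -First 1
Import-VM -Path $config.FullName -Copy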

In-Place Host Upgrades

Windows has earned a reputation for coping poorly with operating system upgrades. Therefore, a lot of people won’t even try it anymore. I can’t say that I blame them. However, a lot of people haven’t noticed that the upgrade process has changed dramatically. Once upon a time, there was a great deal of backing up and in-place overwriting. The Windows upgrade process no longer does any of that. It renames the Windows folder to Windows.old and creates an all-new Windows folder from the install image. But the matter of merging in the old settings remains, and most upgrade problems stem from that merge.

I have not personally attempted an upgrade of Windows Server for many years now. I do not exactly know what would happen if you simply upgraded a 2012 R2 system directly to 2016. On paper, it should work just fine. In principle…

If you choose the direct upgrade route, I would:

  • Get a good backup and manually verify it.
  • Schedule enough time to allow for the entire thing to finish, go horribly wrong, and rebuild from scratch
  • Make a regular file copy of all of the VMs to some alternative location

Wipe and Reinstall

If you want to split the difference a bit, you could opt to wipe out Windows/Hyper-V Server without hurting your virtual machines. Doing so allows you to make a clean install on the same hardware. Just make certain that your virtual machines’ files are not in the location that you’re wiping out. You can do that with a regular file copy or just by holding them on a separate partition from the management operating system. Once the reinstall has completed, import the virtual machines. If you’re going to run them from the same location, use the Register option.

Leveraging Cross-Version Virtual Machine Migration Options

All of these options grant you a sort of “try before you commit” capability. In-place upgrades fit that category the least; going back will require some sacrifice. However, the other options allow you to move freely between the two versions.

Some people have reported encountering performance issues on 2016 that they did not have with 2012 R2. To date, I have not seen any reason to believe that 2016 possesses any inherent flaws. I haven’t personally involved myself with any of these systems, so I can only speculate. So far, these reports seem isolated, which would indicate situational rather than endemic problems. Hardware or drivers that aren’t truly ready for 2016 might cause problems like these. If you have any concerns at all, wouldn’t you like the ability to quickly revert to a 2012 R2 environment? Wouldn’t you also like to be able to migrate to 2016 at your leisure?

Cross-Version Virtual Machine Limitations

Unfortunately, this flexibility does not come without cost. Or, to put a more positive spin on it, upgrading the configuration version brings benefits. Different version levels bring their own features. I didn’t track down a map of versions to features. If you upgrade from 5.0 to the current version (8.0 as of this writing), then you will enable all of the following:

  • Hot-Add and Hot-Remove of memory and network adapters
  • Production Checkpoints/Disable Checkpoints
  • Key Storage Drive (Gen 1)
  • Shielded VM (Gen 2)
  • Virtual Trusted Platform Module (vTPM) (Gen 2)
  • Linux Secure Boot
  • PowerShell Direct

When you’re ready to permanently make the leap to 2016, you can upgrade a virtual machine with Update-VMVersion. You’ll also find the option on the VM’s right-click menu:

m16_upvmver

For either method to be successful, the virtual machine must be turned off.
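
In PowerShell, with a hypothetical VM name, the whole thing boils down to:

Stop-VM -Name svtest            # the VM must be off
Update-VMVersion -Name svtest   # one-way trip; the VM will no longer run on 2012 R2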

Have any questions or feedback?

Leave a comment below!


How to Deploy Hyper-V Guests with Windows Deployment Services

31 Aug 2017 by Eric Siron
   
0    
Hyper-V Tutorials

Most Hyper-V literature focuses on installation and maintenance of the hypervisor. Most of the rest of it focuses on maintenance and operation of virtual machines. We don’t talk very much about deploying guests. When we do, we tend to focus on all of the switches and options that pertain to the virtual machine object. We pay almost no attention to the guest operating system. This leads to many unanswered questions, primarily: “What’s the best way to deploy Hyper-V virtual machines?” Unfortunately, there isn’t a “best” way. Windows Deployment Services (WDS) presents itself as a strong contender.

This guide is part of a 2-part series. The other guide focuses on How to Deploy Hyper-V Hosts.

Applicability of this Article

I mentioned in an earlier article that WDS has many uses far beyond Hyper-V, and that’s still true. In this article, I will show you how to take the generic Windows Server install ISO that you download from Microsoft and use it to deploy Hyper-V virtual machines. If you trim away the parts about Hyper-V, you can deploy to physical machines just as easily. To aid you in that quest, I will also show how to set up driver packages.

This article does not show how to use an already-deployed system as a base image. You may want to use that technique if you’ve got particular software that you want in the image, including driver packages that can only be deployed by .exe or .msi.

Prerequisites for Windows Deployment Services

I find WDS easy to set up and configure, although it may not look that way at first glance. You’ll need a few things to get started.

  • A Windows Server GUI system. I don’t believe that WDS works on Server Core at all. You can manage it remotely using RSAT (Remote Server Administration Tools) but only from Server SKUs. Windows 10 RSAT does not contain the WDS tools.
  • An Active Directory domain that contains the WDS system and the Hyper-V host(s). WDS can work without a domain, but people who choose workgroup mode prefer doing things the hard way. I don’t want to ruin anything for them by providing a how-to.
  • Enough local storage on an NTFS volume to hold at least one installation image. The system running WDS must be able to access the storage location through a drive letter. WDS is not cluster-aware, so I recommend that you avoid CSVs.
  • A DHCP server. I only use the Windows-based DHCP role, but others should work.
  • Install images for Windows Server and/or Hyper-V Server. You can start with physical media or downloaded ISOs.
  • Every system that will receive an image from a WDS server must be PXE-capable. Your network infrastructure must allow PXE booting.

Installing Windows Deployment Services

Windows Deployment Services ships as an innate role of Windows Server. I will be demonstrating on WS2016. All currently-supported versions provide it and you follow nearly the same process on each of them.

  1. Start in Server Manager. Use the Add roles and features link on the main page (Dashboard) or on the Manage drop-down.
  2. Click Next on the introductory page.
  3. Choose Role-based or feature-based installation.
    arf_rfinstallationmode
  4. On the assumption that you’re running locally, you’ll only have a single server to choose from. If you’ve added others, choose accordingly.
    arf_targetserver
  5. Check Windows Deployment Services.
    wdt_roleselection
  6. Immediately upon selecting Windows Deployment Services, you’ll be asked if you’d like to include the management tools. Unless you will always manage from another server, leave the box checked and click Add Features.
    wdt_managementfeatures
  7. Click Next on the Select server roles page and then click Next on the Select server features page (unless you wish to pick other things; no others are needed for this walkthrough).
  8. You’ll receive another informational screen explaining that WDS requires further configuration for successful operation. Read through for your own edification. You can use the mentioned command line tools if you like, but that won’t be necessary.
  9. You will be asked to select the components to install. Leave both Deployment Server and Transport Server checked.
    wdt_componentselection
  10. Click Install on the final screen and wait for the installation to finish.

You’ve finished the installation portion.

Initial WDS Configuration

Under Administrative Tools on the WDS host (listed as “Windows Administrative Tools” on the Start menu in Server 2016), you’ll now find Windows Deployment Services. Open that up to begin initial WDS configuration.

wdt_initialscreen

  1. Right-click on the newly installed server (it will have a yellow triangle overlay icon) and click Configure Server.
    wdt_configserverlink
  2. The initial wizard screen discusses the prerequisites. It mentions a DNS server, which I neglected to mention. That’s because I only work with Active Directory systems, and you can’t AD without DNS.
  3. On the next screen, choose Integrated with Active Directory.
    wdt_ad
  4. Choose a location to store the images that this server will deploy. The location that you specify must be accessible via a drive letter that’s local to the WDS system. The wizard will automatically create a REMINST share on the specified folder to make it available to deploying systems.
    wdt_location
  5. Choose the client response action. If you’re not sure what to pick, don’t worry. You can easily change this option later.
    1. Do not respond to any client computers. You could set this option to give yourself more time to configure the server.
    2. Respond only to known client computers. This option requires the most hands-on management. You must determine the hardware ID of computers that this host will respond to and configure them in Active Directory. I’ll show you how to do that in a moment. If you use Active Directory-Based Activation or KMS and you can’t isolate your PXE network, this setting ensures that your licenses aren’t given to unauthorized systems.
    3. Respond to all client computers (known and unknown), with optional Require administrator approval for unknown computers. This option can be dangerous but reduces administrative effort. Even if you aren’t using automatic key deployments and/or can limit the scope of PXE booting to secured networks, I still recommend that you set the Require administrator approval option. If WDS deploys to a computer without a pre-staged account, it gets a generic name and gets placed in the domain’s default computer OU. That requires even more busy work to clean up than the other choices.
      wdt_clientresponse
  6. WDS will perform configuration.
    wdt_configprocess
  7. The final screen allows you to jump into image selection. I avoid this option because it’s not the same process that you’ll use with an active WDS server. If you choose to do so, then you will need to provide the path to a location that contains an operating system’s boot.wim and install.wim.
    wdt_finalconfigpage

Now your WDS system can listen for and respond to PXE requests, but doesn’t know about any clients and doesn’t have anything to deploy.

Creating WDS Boot Images

When a system starts up and PXE directs it to the WDS server, it first receives a boot image. The boot image should match the operating system it will deploy. You can obtain one easily.

  1. Find the DVD or ISO for the operating system that you want to install. Look in its Sources folder for a file named boot.wim. That’s what you want.
    wds-bootwim
  2. On your WDS server, right-click the Boot Images node and click Add Boot Image.
    wds_addbootimage
  3. On the first page of the wizard, browse to the image file. You can load it right off the DVD as it will be copied to the local storage that you picked when you configured WDS.
    wds_browseboot
  4. You’re given an opportunity to change the boot image’s name and description. I would take that opportunity, because the default Microsoft Windows Setup (x##) won’t tell you much when you have multiples.
    wds-bootimagename
  5. You will then be presented with a confirmation screen. Clicking Next starts the file copy to the local source directory. After that completes, just click Finish.

Your boot image will then appear in the list in the right pane.

wds_bootimagelist
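
If you prefer the command line, the WDS PowerShell cmdlets that install with the role should handle the same import. A hedged sketch with a hypothetical path and name; verify the parameter names with Get-Command Import-WdsBootImage -Syntax on your build before scripting around them:

Import-WdsBootImage -Path 'D:\WS2016\sources\boot.wim' -NewImageName 'WS2016 Setup (x64)'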

Adding Install Images to a WDS Server

After the boot image, a newly-deploying system will require an install image. Install images contain the Windows/Windows Server deployment, plus anything that you’ve added. Adding install images is a very simple process.

  1. First, you will need a WIM file that contains the operating system to deploy. Remember the boot.wim file that you found in step 1 of the boot image section? Go back there and locate install.wim.
    wds_installwimorig
  2. Second, you need an install group, if you don’t already have one. WDS will deduplicate the images inside an install group, so organize them by operating system. You can create the install image groups separately, or while you are importing an install image. To create one separately, right-click Install Images and click Add Image Group. You’ll get a small dialog where you can enter the name for your group.
  3. To add an image, right-click Install Images and click Add Install Image.
    wds_addinstallimage
  4. Choose or create the image group to add the image to.
    wds_imagegroup12r2
  5. Locate the install image file that you found in step 1.
    wds_selectoriginstallwim
  6. The WIMs that come in Microsoft’s deployment images typically contain multiple images. In this screenshot, you are looking at two editions of Windows Server 2012 R2 (Standard and Datacenter), each in two install modes (full and Core). Move the column header sliders around to see the names and descriptions more clearly. You can remove any editions that you don’t want new systems to use; for instance, uncheck the Datacenter editions if you aren’t licensed for them. If you clear the box at the very bottom of the dialog, you’ll get the chance to rename the selected images on the following pages. If you leave it checked, the names shown here are what you’ll have to choose from when you install a new system via WDS. (If you’d like to preview a WIM’s contents before importing it at all, see the short PowerShell aside after this list.)
    wds_wimindexes12r2
  7. If you didn’t leave that box checked, you’re now looking at a screen that asks you to enter a name and description. I didn’t do that, so I have no screen shot. Every image that was checked to be included in step 6 will have its own dialog page.
  8. After that, just Next and Finish. The file will be imported. It will take some time as it prepares/calculates deduplication info.
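
One optional aside before wrapping up: if you want to preview which editions a WIM contains before importing it at all (the situation from step 6), the DISM PowerShell module can list them. Run it elevated; the path is hypothetical:

Get-WindowsImage -ImagePath 'D:\WS2012R2\sources\install.wim'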

That’s all it takes to add images. Now, when systems boot, they’ll have each of these items as possible install images… maybe. You can override that.

Limiting the Users that Can Install a WDS Image

During the deployment process, WDS prompts for user credentials before showing available images. You can restrict image access by user account and/or group membership.

  1. In WDS, open the Install Images section to install group that you want to work with. You can set permissions on the group or any one of the images in the group.
    1. To set permissions on the group, right-click the group and click Security in the menu.
    2. To set permissions on a specific image, left click the image group in the tree to show the images in the right pane. Right-click the desired image and click Properties. Switch to the User Permissions tab.
  2. You’ll now be viewing a standard Windows ACL dialog. In order to apply an image, an account needs Read and Execute permissions. By default, Authenticated Users have Modify permissions and any account that resolves to a member of the local Users group has Read and Execute permissions. You’ll need to break inheritance to remove those before you can set more granular permissions. Remember to leave SYSTEM and Administrators with Full Control or you’ll wind up with problems.

In the article on using WDS for specific systems, I talked about locking an install image to specific machines by using filters. You can apply those alongside, or instead of, user permissions.

Linking Systems to Active Directory Accounts in WDS

This section is technically optional. If you follow this part, new systems will deploy with a computer name that you select and into the Active Directory OU that you select. If you skip this part, computers will deploy with a generic name into the default OU (usually Computers). I always use these steps to pre-stage my computer accounts and then leave them linked.

    1. Gather the system’s BIOS UUID. This article focuses on new Hyper-V guests, so I’d guide you to the fourth option below. However, I’m including the other possibilities for completeness.
      1. If it’s a Windows system and it’s already deployed, you can collect its UUID from an elevated PowerShell prompt with:
        gwmi Win32_ComputerSystemProduct | select UUID.
      2. If it’s a Linux system and it’s already deployed… well… you can’t push Linux with WDS so this article is probably not very helpful for you. But you can still get its BIOS UUID by installing the dmidecode package from your distribution’s repository and running:
        sudo dmidecode -s system-uuid.
      3. If the system is physical and not yet deployed, you can set it to boot to PXE and start it up. When you see something like the following, record the GUID:
        PXE GUID
      4. If the system is a Hyper-V guest and not yet deployed, you can extract its GUID using WMI. That’s non-trivial enough that I wrote an entire script to not only extract the UUID, but to update an Active Directory computer account with it. If the Hyper-V VM in question was copied from another, then it likely has an identical UUID which will cause problems. I have a PowerShell script and an EXE utility that will allow you to modify the UUID.
      5. If the system is a guest on another hypervisor, search for a technique to determine the UUID. You can usually try the PXE method with a VM as well.
    2. Decide on a name for the system, if you don’t already have one in mind/in production. If it doesn’t already exist, create that computer account in Active Directory in the desired OU.
    3. On the computer account’s properties sheet in Active Directory Users and Computers, switch to the Attribute Editor tab.
    4. Double-click the netbootGUID item. Leave the drop-down on Hexadecimal. Paste/type in the UUID without spaces or dashes.
      wds_netbootguidattred_new
    5. If you look at it in the list, it will have reversed a bunch of the characters. I don’t know why it does this, but that’s how WDS works:
      wds_netbootinlist
    6. In WDS, click on Active Directory Prestaged Devices. You should now see your new/updated device in the list. Double-click it to check the UUID. If it’s incorrect, just paste over the correct UUID. You don’t need to worry about braces or dashes; it will figure it out. You can’t use spaces, though. Remember that when you open it, WDS will have reversed several of the characters. Don’t try to make it stop. If you succeed, it won’t match when it boots.
      wds_fixeduuid
    7. Switch to the Boot tab. Set the PXE mode to whatever suits you; just make sure that you pick something that allows you to start in PXE mode or you’ll have trouble following the rest of this article. Select a boot image or it won’t work at all.
      wds_pxemodeandbootwim

WDS does include an Add Device “wizard”. It’s not very smart, though. You need to know the distinguished name (DN) of the target OU. That’s not tough. But, you can only use it to add devices. You can’t work with systems that exist in the directory but don’t have their netbootGUID property set. If you install ADUC on the WDS system, then it injects a new dialog into the new computer account wizard. That dialog allows you to set the GUID more easily but doesn’t let you set the boot.wim. The way that I showed you is a bit clumsier, but it’s easier overall and universally applicable.
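
If you'd rather script the pre-staging, here is a minimal sketch. It is not the script mentioned above; it assumes the ActiveDirectory module (RSAT), a hypothetical virtual machine and computer account both named "svtest", and that the account already exists. It reads the VM's BIOS GUID from the Hyper-V WMI provider and writes the same straight hex that the manual Attribute Editor route pastes in:

# Pull the BIOS GUID from the VM's realized (non-snapshot) settings
$vssd = Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_VirtualSystemSettingData |
        Where-Object { $_.ElementName -eq 'svtest' -and $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }
# Strip braces and dashes, convert the hex pairs to bytes, and stamp the computer account
$hex   = $vssd.BIOSGUID -replace '[{}-]', ''
$bytes = [byte[]](0..15 | ForEach-Object { [convert]::ToByte($hex.Substring($_ * 2, 2), 16) })
Set-ADComputer -Identity svtest -Replace @{netbootGUID = $bytes}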

Setting a Hyper-V Virtual Machine to PXE Boot

Your WDS infrastructure can now accept inbound requests. Now you move on to the guests. WDS operates by responding to PXE requests (pre-boot execution environment) that arrive over the network. So, we need to set our virtual machines to boot from the network.

  1. Create the Hyper-V virtual machine using the technique of your choice. Make sure that you do create and attach a new virtual disk to hold the operating system. You can use Generation 1 or Generation 2. Generation 2 will probably deploy more quickly because it can use the faster synthetic virtual network adapter, but deployment speed depends on more than the network. If you use the wizard in Hyper-V Manager or Failover Cluster Manager, one of the installation options is Install an operating system from a network-based installation server. If you choose that option, you’ll be able to skip down to step 4. The following screenshot was taken from a Generation 1 VM. Generation 2 VMs use the same wording, but the screen looks a bit different because there’s no floppy disk choice.
    wds_gen1installfromnet
  2. Generation 1 only: Attach a legacy (emulated) network adapter. The virtual machine must be turned off.
    1. PowerShell:
      Add-VMNetworkAdapter -VMName svwdsg1-12r2 -SwitchName vSwitch -IsLegacy $true
    2. In the GUI, use the Add Hardware tab:
      wds_addlegacyadapter
  3. Change the boot order so that the network adapter is at the highest position. As much as I like PowerShell, you’ll have an easier time doing this through the GUI (although, for Generation 2, see the short PowerShell sketch after this list). For Generation 1, use the BIOS tab. For Generation 2, use the Firmware tab. Highlight the adapter and use Move Up until it’s at the top.
    wds_bootorder
  4. Optional: follow the instructions under the section titled “Linking Systems to Active Directory Accounts in WDS” above to pre-stage the virtual machine’s OS account in Active Directory. If you do not, then the guest operating system will be given a default generic name and placed in the directory’s default OU.
  5. Open the VM’s console (Connect on its right-click menu). For Generation 2 VMs on 2016 or later, do not use the Shielded VM feature until after installation. The Shielded VM feature blocks console access.
  6. Start the virtual machine. Watch the console carefully, as you may need to press a key in order to accept the network boot option and start the installation:
    wds_presskey
  7. Choose the operating system boot image to use.
    wds_choosebootimage
  8. The selected boot image will be transferred to your system.
    wds_boottransfer
  9. Setup will start by asking you which language you wish to use.
    wds_setup1
  10. Next, you will be asked for credentials.
    wds_credentials
  11. Next, select the operating system to install. Take care to use one that aligns with the boot image that you chose; I’m not sure what happens if they don’t match.
    wds_installimageselectiongeneric
  12. From this point onward, installation works the same as it would from DVD/ISO/USB media. I trust you have enough experience with that to not need my help.
  13. Once installation completes, you may want to reconfigure your virtual machine to not boot from the network first. If you’re using a Generation 1 VM, you might want to shut it down and remove that legacy network adapter altogether.
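
For Generation 2 VMs, the boot order change from step 3 can be scripted after all if you prefer; a short example with a hypothetical VM name (it assumes the VM has a single network adapter):

Set-VMFirmware -VMName svwdsg2-16 -FirstBootDevice (Get-VMNetworkAdapter -VMName svwdsg2-16)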

Add Driver Packages to a WDS Server

This section does not align perfectly with the stated purpose of this article. A Hyper-V virtual machine has no use for additional driver packages. However, this article does present the usage of generic images straight from install media. For physical systems, such images pair very well with driver packages in WDS.

You will encounter one problem, though. You can only use driver packages that provide the basic .inf, .cat, and .sys files (some driver packages include additional files). WDS cannot use drivers that deploy only as an EXE or MSI.

To install a driver package:

  1. Right-click Drivers and click Add Driver Package.
  2. Select the location, and choose whether to load all drivers from that location (and its sub-folders) or to open a specific .inf. I’m only going to walk through the single-INF route. The primary difference is that we’re picking one driver instead of many.
    wds_infselection
  3. Assuming that the wizard detects that you have selected a valid driver inf file, you will be asked to verify it. If you chose the option to load all driver packages, then you might have a longer list.
    wds_infverify
  4. The next two screens verify and install the files that you selected.
  5. Next you will be asked to select a driver group. I will use a subsection to introduce new driver groups. If you create one now, it will be quicker, but you’ll need to set up options later.
    wds_groupselectionforpackage
  6. The final screen includes a checkbox for you to modify the driver group filters if you chose to create one. The filters work much like those for install images.
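
If you end up importing many packages, the WDS PowerShell cmdlets may save you some clicking. A hedged sketch with a hypothetical INF path and group name; check Get-Command Import-WdsDriverPackage -Syntax on your build before relying on the parameter names:

Import-WdsDriverPackage -Path 'C:\Drivers\NIC\netx64.inf' -GroupName 'Dell R740 Drivers'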

Add Driver Groups

You need at least one Driver Group for your drivers. Like boot and install images, driver groups will be available to all systems unless you create filters specifying otherwise. Use Driver Group filters to restrict sets of drivers to particular hardware. It’s not always a good idea to restrict drivers like that, so think carefully about the consequences of setting up group filters.

  1. Right-click Drivers and click Add Driver Group.
  2. Name the driver group.
    wds_drivergroupname
  3. Set hardware filters, if desired. These will match by WMI. The fields shown in this screenshot were obtained from
    gwmi win32_computersystem.
    wds_hardwarefilters
  4. Set operating system filters, if necessary. I used
    gwmi win32_operatingsystem to get the operating system version, but I had to add a 0 (zero) for the service pack version before it would accept it. This can be useful if you have driver packs that might work for both 2012 R2 and 2016, but want to ensure that you control exactly which ones get installed.
    wds_drivergrouposfilter
  5. Select the action to follow — should it install every driver in the package regardless of detected hardware, or should it only install drivers when the system has that hardware? This can be useful if the hardware makeup of a deployed system might change later.
    wds_drivergroupinstallaction

Add Driver Packages to a WDS Boot Image

By default, WDS driver packages are available during the install phase. If you need them available during the boot phase (for example, RAID drivers), then you can add them to the boot image. Be aware that adding a driver package to a boot image is permanent. I recommend that you not even try this until you have verified that WDS cannot install to a specific system due to a driver problem. If you have done that, then I highly recommend that you either make a copy of the boot image and import it back in or just reimport the base boot.wim from media as a separate line item. This will protect your existing boot image in the event that something goes awry. If a boot image contains a corrupt driver, the image might become completely unusable for anyone.

To add a driver package to a boot image:

  1. Right-click the boot image and click Add Driver Packages to Image.
  2. Click Next through the warning (it’s essentially what I just told you).
  3. By default, the wizard pre-selects some filters to ensure you get a solid match for driver types. If you want, modify the filters that it chose. Once you’re happy with them, click Search for Packages. The lower part of the screen will then populate with driver packages that match that criteria.
    wds_bootdriverselection
  4. Continue through the wizard to add the driver(s).

Enjoy Your WDS Installation!

You now have all of the basic information that you need to get positive results from WDS. You have more to discover if this didn’t quite seem like enough. Whether you do more or stick with what I’ve shown you, you now have a much more robust deployment system than you could possibly get from media installs alone. As a bonus, you won’t need to dig for media anymore.

This guide is part of a 2-part series. The other guide focuses on How to Deploy Hyper-V Hosts using WDS.

Have any questions or feedback?

Leave a comment below!


How to Deploy Hyper-V Hosts with Windows Deployment Services

29 Aug 2017 by Eric Siron
   
0    
Hyper-V Tutorials

You can find any number of ways to deploy physical systems. Once upon a time, I enjoyed the process of starting with install media and going through all the motions to get a completely fresh environment. Who doesn’t like that new operating system smell? But, I haven’t got that kind of patience anymore. Even if you do, sometimes you need to deploy too many systems for any media-based method. Sometimes you just get bored of it and want to get it over with. Out of all the techniques that I’ve tried, I always return to Windows Deployment Services (WDS). Some of the other options appear to offer more (like VMM), but they invariably promise more than they deliver. WDS produces consistent, repeatable, reliable results.

Applicability of this Article

WDS applies more or less universally to all Windows Server deployments. If a book doesn’t already exist on WDS, a very long one could be written. This article makes no attempt to cover the entire topic. Later on the page, you’ll find a “Choosing a WDS Deployment Style” heading that goes into some detail on the options that are available to you. A quick intro of my intent for this article:

  • How to use a single physical system to build a source image that can then be redeployed to other hardware of the same type.
  • How to capture an image of a single physical system to build a source image that can really only be redeployed back to the same system. You can also use this technique to convert a system between BIOS and UEFI mode — which also means switching a Hyper-V guest from Generation 1 to Generation 2 or vice versa.

This article will not cover how to create a completely generic image that can be deployed to any system. I’ll tackle that topic in an upcoming article on using WDS for Hyper-V guests.

Even though I’m tying all of this to Hyper-V, that’s certainly not the limit of WDS or any of these techniques. I focus on Hyper-V because I believe that most Hyper-V deployments are done the hard way and then needlessly protected by backup. WDS makes the whole thing easier. You can deploy a new host in a few clicks. If you lose a host, you can redeploy it from WDS and restore guests from backup more quickly than you can muck through a bare metal restore or a new install from media.

Warning: If you place your virtual machines on the C: drive, I would avoid using this method to capture an already-deployed Hyper-V host. The entire contents of the C: drive are included.

Prerequisites for Windows Deployment Services

I find WDS easy to set up and configure, although it may not look that way at first glance. You’ll need a few things to get started.

  • A Windows Server GUI system. I don’t believe that WDS works on Server Core at all. You can manage it remotely using RSAT (Remote Server Administration Tools) but only from Server SKUs. Windows 10 RSAT does not contain the WDS tools.
  • An Active Directory domain that contains the WDS system and the Hyper-V host(s). WDS can work without a domain, but people who choose workgroup mode prefer doing things the hard way. I don’t want to ruin anything for them by providing a how-to.
  • Enough local storage on an NTFS volume to hold at least one installation image. The system running WDS must be able to access the storage location through a drive letter. WDS is not cluster-aware, so I recommend that you avoid CSVs.
  • A DHCP server. I only use the Windows-based DHCP role, but others should work.
  • Install images for Windows Server and/or Hyper-V Server. You can start with physical media or downloaded ISOs.
  • Every system that will receive an image from a WDS server must be PXE-capable. Your network infrastructure must allow PXE booting.

Installing Windows Deployment Services

Windows Deployment Services ships as an innate role of Windows Server. I will be demonstrating on WS2016. All currently-supported versions provide it and you follow nearly the same process on each of them.

  1. Start in Server Manager. Use the Add roles and features link on the main page (Dashboard) or on the Manage drop-down.
  2. Click Next on the introductory page.
  3. Choose Role-based or feature-based installation.
    arf_rfinstallationmode
  4. On the assumption that you’re running locally, you’ll only have a single server to choose from. If you’ve added others, choose accordingly.
    arf_targetserver
  5. Check Windows Deployment Services.
    wdt_roleselection
  6. Immediately upon selecting Windows Deployment Services, you’ll be asked if you’d like to include the management tools. Unless you will always manage from another server, leave the box checked and click Add Features.
    wdt_managementfeatures
  7. Click Next on the Select server roles page and then click Next on the Select server features page (unless you wish to pick other things; no others are needed for this walkthrough).
  8. You’ll receive another informational screen explaining that WDS requires further configuration for successful operation. Read through for your own edification. You can use the mentioned command line tools if you like, but that won’t be necessary.
  9. You will be asked to select the components to install. Leave both Deployment Server and Transport Server checked.
    wdt_componentselection
  10. Click Install on the final screen and wait for the installation to finish.

You’ve finished the installation portion.

Initial WDS Configuration

Under Administrative Tools on the WDS host (listed as “Windows Administrative Tools” on the Start menu in Server 2016), you’ll now find Windows Deployment Services. Open that up to begin initial WDS configuration.

wdt_initialscreen

  1. Right-click on the newly installed server (it will have a yellow triangle overlay icon) and click Configure Server.
    wdt_configserverlink
  2. The initial wizard screen discusses the prerequisites. It mentions a DNS server, which I neglected to mention. That’s because I only work with Active Directory systems, and you can’t AD without DNS.
  3. On the next screen, choose Integrated with Active Directory.
    wdt_ad
  4. Choose a location to store the images that this server will deploy. The location that you specify must be accessible via a drive letter that’s local to the WDS system. The wizard will automatically create a REMINST share on the specified folder to make it available to deploying systems.
    wdt_location
  5. Choose the client response action. If you’re not sure what to pick, don’t worry. You can easily change this option later.
    1. Do not respond to any client computers. You could set this option to give yourself more time to configure the server.
    2. Respond only to known client computers. This option requires the most hands-on management. You must determine the hardware ID of computers that this host will respond to and configure them in Active Directory. I’ll show you how to do that in a moment. If you use Active Directory-Based Activation or KMS and you can’t isolate your PXE network, this setting ensures that your licenses aren’t given to unauthorized systems.
    3. Respond to all client computers (known and unknown), with optional Require administrator approval for unknown computers. This option can be dangerous but reduces administrative effort. Even if you aren’t using automatic key deployments and/or can limit the scope of PXE booting to secured networks, I still recommend that you set the Require administrator approval option. If WDS deploys to a computer without a pre-staged account, it gets a generic name and gets placed in the domain’s default computer OU. That requires even more busy work to clean up than the other choices.
      wdt_clientresponse
  6. WDS will perform configuration.
    wdt_configprocess
  7. The final screen allows you to jump into image selection. I avoid this option because it’s not the same process that you’ll use with an active WDS server. If you choose to do so, then you will need to provide the path to a location that contains an operating system’s boot.wim and install.wim.
    wdt_finalconfigpage

Now your WDS system can listen for and respond to PXE requests, but doesn’t know about any clients and doesn’t have anything to deploy.

Choosing a WDS Deployment Style

WDS applies to far more than Hyper-V, and I can’t possibly cover all possible options. I’m going to narrow the operational scope down to two operating systems: Hyper-V Server (the thing that everyone calls “Hyper-V Core”) and Windows Server with the Hyper-V role. I’m not going to make any distinction here between Core or GUI mode for Windows Server. My demonstrations will be for Windows Server with Hyper-V with a full GUI. You’ll follow essentially the same directions no matter which of these you choose. You can also go back later and add in any of the others if you want to have more options.

You have a larger decision to make, though. Do you want to use completely generic images, or do you want to retain a customized image for your Hyper-V hosts?

The Case for Generic Images

Most WDS installations that I’ve encountered make use of generic images. Essentially, you simply provide unmodified WIM files directly from Microsoft’s original deployment media. This approach provides multiple benefits:

  • It’s what everyone does. I’m generally resistant to the bandwagon effect, but anyone that has any experience with WDS will instantly understand your environment if you stick to generic images. If you get stuck, other people will have an easier time helping you if you use generic images.
  • WDS allows you to build out separate driver package groups to match up with physical systems. Driver variances are a big reason that people resist centralized deployment systems. WDS solves most of that problem while still allowing you to use generic images.
  • Maintenance is simpler.

The Case for Specific Images

I use machine-specific images in my lab quite a bit. I’m on the fence about doing anything like this in production, although it certainly has some appeal.

  • It works like a checkpointing system for a physical box. You can break the machine and then revert it to almost exactly the same previous state. It’s not very fast, but no worse than any other bare metal technique that I’ve used.
  • For systems that don’t change, it’s a semi-backup.
  • You can convert a system from BIOS mode to UEFI mode and from UEFI mode to BIOS mode. Incidentally, it also means that you can freely convert from a Hyper-V Generation 1 VM to a Generation 2 VM and back again.

Mixing it Up

To a degree, you can have your cake and eat it, too. WDS doesn’t care how many separate images it stores. You can build all the different deployments that you want and prop them up to your heart’s content. As long as you can keep them all straight, then everything will be fine.

You can also build a reference system and then store it as a generic image. I often use this approach to get around some limitations. WDS relies on WIM images, which can accept some types of drivers, applications, and updates, but not all. If I need a package that can only be installed to a live system, then I start with a live system. Keeping those applications updated can be challenging, but it’s usually better than rebuilding a server from media.

Creating WDS Boot Images

You create WDS install images differently depending on the desired outcome. However, all methods depend on one single root: you require a boot image. Boot images are small start-up stubs that take the hand-off from PXE to start up a computer. In the case of WDS, a boot image should match the operating system it will deploy. You can obtain one easily.

  1. Just find the DVD or ISO for the operating system that you want to install. Look in its Sources folder for a file named boot.wim. That’s what you want.
    wds-bootwim
  2. On your WDS server, right-click the Boot Images node and click Add Boot Image.
    wds_addbootimage
  3. On the first page of the wizard, browse to the image file. You can load it right off the DVD as it will be copied to the local storage that you picked when you configured WDS.
    wds_browseboot
  4. You’re given an opportunity to change the boot image’s name and description. I would take that opportunity because the default Microsoft Windows Setup (x##) won’t tell you much when you have multiples.
    wds-bootimagename
  5. You will then be presented with a confirmation screen. Clicking Next starts the file copy to the local source directory. After that completes, just click Finish.

Your boot image will then appear in the list in the right pane.

wds_bootimagelist

Linking Systems to Active Directory Accounts in WDS

This section is technically optional. If you follow this part, new systems will deploy with a computer name that you select and into the Active Directory OU that you select. If you skip this part, computers will deploy with a generic name into the default OU (usually Computers). I always use these steps to pre-stage my computer accounts and then leave them linked.

    1. Gather the system’s BIOS UUID.
      1. If it’s a Windows system and it’s already deployed, you can collect its UUID from an elevated PowerShell prompt with: 
        gwmi Win32_ComputerSystemProduct | select UUID.
      2. If it’s a Linux system and it’s already deployed… well… you can’t push Linux with WDS so this article is probably not very helpful for you. But you can still get its BIOS UUID by installing the dmidecode package from your distribution’s repository and running: 
        sudo dmidecode -s system-uuid.
      3. If the system is physical and not yet deployed, you can set it to boot to PXE and start it up. When you see something like the following, record the GUID:
        PXE GUID
      4. If the system is a Hyper-V guest and not yet deployed, you can extract its GUID using WMI. That’s non-trivial enough that I wrote an entire script to not only extract the UUID but to update an Active Directory computer account with it. If the Hyper-V VM in question was copied from another, then it likely has an identical UUID which will cause problems. I have a PowerShell script and an EXE utility that will allow you to modify the UUID.
      5. If the system is a guest on another hypervisor, search for a technique to determine the UUID. You can usually try the PXE method with a VM as well.
    2. Decide on a name for the system, if you don’t already have one in mind/in production. If it doesn’t already exist, create that computer account in Active Directory in the desired OU.
    3. On the computer account’s properties sheet in Active Directory Users and Computers, switch to the Attribute Editor tab.
    4. Double-click the netbootGUID item. Leave the drop-down on Hexadecimal. Paste/type in the UUID without spaces or dashes.
      wds_netbootguidattred_new
    5. If you look at it in the list, it will have reversed a bunch of the characters. I don’t know why it does this, but that’s how WDS works:
      wds_netbootinlist
    6. In WDS, click on Active Directory Prestaged Devices. You should now see your new/updated device on the list. Double-click it to check the UUID. If it’s incorrect, just paste over the correct UUID. You don’t need to worry about braces or dashes; it will figure it out. You can’t use spaces, though. Remember that when you open it, WDS will have reversed several of the characters. Don’t try to make it stop. If you succeed, it won’t match when it boots.
      wds_fixeduuid
    7. Switch to the Boot tab. Set the PXE mode to whatever suits you; just make sure that you pick something that allows you to start in PXE mode or you’ll have trouble following the rest of this article. Select a boot image or it won’t work at all.
      wds_pxemodeandbootwim

WDS does include an Add Device “wizard”. It’s not very smart, though. You need to know the distinguished name (DN) of the target OU. That’s not tough. But, you can only use it to add devices. You can’t work with systems that exist in the directory but don’t have their netbootGUID property set. If you install ADUC on the WDS system, then it injects a new dialog into the new computer account wizard. That dialog allows you to set the GUID more easily but doesn’t let you set the boot.wim. The way that I showed you is a bit clumsier, but it’s easier overall and universally applicable.

Preparing a System to use as a WDS Image

In this article, I will cover the creation of hardware-specific images. It will include images that are locked to a single computer and images that apply to any computer of that type. I will save a discussion on completely generic WDS image creation for an article on using WDS to deploy virtual machines.

  1. Plan what you want in your image besides Windows Server. Consider:
    1. Roles and Features (including Hyper-V)
    2. Software packages — updating these will be a chore, but is that worse than needing to install fresh on a new server build? That question isn’t rhetorical. You need to decide.
    3. Drivers — same as software packages, but with a twist. Some driver packages can be automatically applied by WDS during deployment
    4. Dormant files — BIOS updates, automated scripts, etc. These sorts of things can also be pushed into a WIM
    5. Make sure everything that you want to keep is/will be on the C: drive. Each indexed image in a WIM file can only contain one file system layout. Make sure that anything that you don’t want to keep is off of/will not be placed on the C: drive. For instance, if you’re capturing an existing Hyper-V host, don’t leave any VMs on it (unless you really want them in the image…?).
  2. Install the system and get it exactly the way that you want it to be saved. Domain membership state doesn’t really matter, but later steps will be easier if it’s in the domain.
  3. Decide now if you want it to be completely generic or locked to this specific piece of hardware.
    1. If you want it to be locked to this piece of hardware, you’re done with this section.
    2. If you want it to be genericized for all computers of the same hardware type, open an elevated command prompt and run: 
      C:\Windows\system32\Sysprep\sysprep.exe /generalize /oobe /shutdown.
  4. You need to be able to access the computer’s console, whether physically or by some out-of-band technique, to proceed.

Pick one of the following two paths to take, depending on what you chose in step 3.

Capturing a Specific System for WDS

I need to be very clear about what we’re doing: we are capturing the image of a system as it is currently installed and saving it for WDS to redeploy back to the exact same system at a later date. This is uncommon, atypical, nonstandard, apocryphal, frowned upon, side-eye-inducing, and generally weird. This is what I use for my lab Hyper-V hosts for a variety of reasons. If you later try to deploy it to another system… I don’t know what will happen. For Hyper-V virtual machines, I also use this when I want to make a Generation 1 VM into a Generation 2 VM. Sometimes I have to disjoin/rejoin the system after conversion, but the process has always worked well for me.

  1. Decide how you will save the captured image: local disk, USB-attached disk, or network.
    1. For a local disk, you don’t have to do anything unless there is a reason that it wouldn’t load in the Windows Server repair console. It’s perfectly fine to have it write to its own C: drive even if you’re capturing the C: drive, provided that C: has enough room to hold a copy of itself. You’re just saving a file with the current contents of the partition; you don’t need a completely empty disk or anything like that. If possible, choose this route as it will be the fastest. You can boot the system normally and transfer the file over the network later if that’s its ultimate destination.
    2. For USB-connected storage, get the device at least ready. I’ve had some servers that wouldn’t boot at all if a non-bootable USB disk was attached anywhere. The disk needs to be formatted as NTFS and have enough room to hold the contents of C: in a single file.
    3. For a network target, you might need to load the network driver. I would plan for that to happen, as I’ve had a server auto-detect its network once but then at the next iteration, the same server would not.
      1. You could just have a disk ready (USB or CD or whatever) with the files on hand. You’ll need the raw .inf, .cat, and .sys files; an installer won’t work.
      2. If the system is booted into Windows, or another just like it, all you need to do is look at the currently-installed driver:
        wds_driverwin
      3. Look in C:\Windows\System32\DriverStore\FileRepository for a folder that starts with the same name as the driver’s primary file. If you want, you can copy the driver files out of that location to external media. If you’re going to be capturing from the same system, you’ll have an opportunity to load the driver right from that location later. Just make sure that you remember the driver’s file name so that you can find it.
        wds_driverwinfilelist
  2. At the system console, boot the computer to install media. You can use a DVD or follow the instructions in the first half of the linked article to create a bootable USB install disk.
  3. When you get to the Install Now screen, click Repair Your Computer instead.
    wds_repairlink
  4. Click the Troubleshoot option:
    wds_troubleshootoption
  5. Click Command Prompt.
    wds_commandprompt
  6. Next, you need to determine which drive letter holds what you normally think of as the C: drive. I’m betting on D: or E:. Type 
    dir e: and press [Enter]. You can also use DISKPART, which you’ll see a couple of steps down.
    wds_findsysdrive
  7. If that doesn’t work, go through the drive letters until you find it.
  8. Now that you have your source, you need to determine your target.
    1. If it’s a local or USB-attached disk, you could scan drive letters like you did for the C: drive. You can also use DISKPART. Type 
      diskpart and wait for the interface to load. Then type 
      list volume and look at the output for the drive that you want to use, as well as the letter that’s assigned to it. 
      exit when done (I don’t have an external USB so my output is underwhelming):
      wds_diskpart
    2. If you want to use a network target, first check to see if you have an IP with 
      ipconfig. If you get absolutely nothing, that’s because your network driver wasn’t loaded. If you got an IP, skip past these next sub-directions.

      1. Your goal is to run drvload with the .inf file for your network adapter. If you have it on external storage and know the drive letter/path, then 
        drvload z:\driver\driverfile.inf will have you on your way.
      2. If you don’t have it on external storage, then hopefully you took my advice in step 1 and know the local path for the pre-existing driver. Make certain to use tab completion, and load it with something that looks like this: 
        drvload E:\Windows\System32\DriverStore\FileRepository\e1d65x64.inf_amd64_2de580a425dabd06\e1d65x64.inf. You should be rewarded with a message that the driver loaded:
        wds_drvload
      3. Now you need to start the DHCP service: 
        net start dhcp.
      4. You should find yourself with an IP. If you don’t, then it’s likely an issue with the DHCP server. If you haven’t got a DHCP server, then… all of this will ultimately be for naught anyway. If you just want to give it a manual IP: 
        netsh int ipv4 set address “Ethernet” static 192.168.25.147 255.255.255.0 192.168.25.1 (substitute in a valid IP, mask, and gateway for your network, of course).
      5. Start the Workstation service so you can talk to an SMB server: 
        net start workstation. I was not ever able to get DNS resolution to work. If you want to even attempt it, you’ll need to start the DNS client: 
        net start dnscache
    3. Map a drive for the target. I was not able to use DNS on my system, so I went by IP: 
      net use w: \\192.168.25.27\System.
    4. You will be prompted for credentials. Remember to use DOMAIN\user or credentials that are local to the target system.
      wds_map
  9. At this point, you have your target. All of the hard work is done! Save your system’s image with DISM (see the sample command just after this list).

  10. Relax, find something else to do. This will take some time.
    wds_completeduniqueimage
  11. You can now 
    exit back to the previous screen, where you can shut down or continue to boot into the already-installed environment.
  12. The generated WIM file needs to be placed somewhere that the WDS server can reach it. I’ll cover adding images to the WDS server’s repository after I explain how to capture a generic image.
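
For step 9, the capture command will be something along these lines — standard DISM syntax rather than anything exotic, where E: stands in for the Windows volume that you identified and W: for your local, USB, or mapped network target (adjust the letters and the image name to suit):

  dism /Capture-Image /ImageFile:W:\capturedsystem.wim /CaptureDir:E:\ /Name:"Captured System"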

Once the image is added to the WDS server’s list, you can boot this server in PXE mode and reinstall the exact image that you just captured.

Capturing a System to use as a Generic Template in WDS

This section seems similar to the preceding section but has many differences. First and foremost, this practice is common. WDS has a built-in process for it, which might make things a bit easier. Primarily, the difference is the purpose of the captured image. You start with a specific system and turn it into a generic image. It’s useful when you have a particular hardware type — in my case, Dell PowerEdge T20s — and building an image directly from media will not suffice. For instance, you might need or want to have some pre-installed software that can’t be applied to an offline image.

You have two options. One is to sysprep the system and then follow the instructions from the previous section. The benefit to that route is that you have more options as to the location of the saved image. The second is to allow the WDS server to capture the image. That will place the image directly in the WDS server’s image repository, saving you a few steps. It also transfers the image over the network, which might not be optimal. However, WDS’s normal deployment mode is via network, so if it’s not good enough for capture, it’s probably not good enough for deployment, either.

Note: After doing this, UEFI (including Generation 2 VMs) may fail to boot to the WDS server. I followed the instructions posted by “jpdand” on the Microsoft forums to solve the problem.

To capture using the WDS server:

  1. In WDS, start in the Boot Images section. Instructions for creating boot images were presented in an earlier section. Find the boot image for the operating system that you want to capture. Right-click it and click Create Capture Image.
    wds_createcapimagestart
  2. Give the boot image an obvious name. Save it to a location outside the normal WDS tree because it will first create a clone of the image and then import it back into the store… don’t ask me why it can’t do it all at once.
    wds_capturecreatewim
  3. WDS will take a few minutes to build the image. When it completes, check Add image to the Windows Deployment Server now, then click Finish.
    wds_finishcapturebootim
  4. It will have pre-populated the source location with the file that you created. Click Next.
    wds_importbootcap1
  5. It will also pre-populate the name and description. Change if you like, then click Next.
    wds_importbootcap2
  6. Click Next and Finish to complete the process. WDS will import the file back into the boot images. It takes time because it is de-duplicating it against the ones that it already has.
  7. The captured image should now appear in your boot list.
    wds_bootimagewithcap
  8. Find the pre-staged item that matches your source system (alternatively, you can set the server to respond to unknown clients). On the Boot tab, select the image that you just imported.
    wds_setcap
  9. Optional: If you will be uploading the captured image directly to the WDS server, you’ll want to create an install image group to contain the uploaded image (unless you already have a group ready). Just right-click on the Install Images tree node and click Add Image Group. You get a small dialog to enter the name. I’m creating a Windows Server 2016 image, so I called mine “WS2016”. Whatever you do, I recommend you create install image groups based on the operating system type for reasons that I’ll explain in a later section.
  10. You’ll need to be at the console of the source system so that you can configure it for PXE boot and connect it to WDS during the PXE cycle.
  11. Sysprep the source system. At an elevated prompt, run: 
    sysprep /oobe /generalize /reboot. Substitute in “shutdown” for “reboot” if you need to do other things before it starts.
  12. Ensure that the system boots in PXE mode.
  13. Select the appropriate boot image. I’m not sure why it shows all of them since you manually selected one, but…
    wds_selectboot
  14. You will be brought to the first page of a wizard. Click Next.
  15. A drop-down box will be presented that shows all of the located sysprepped volumes. Usually, there’s only one. It probably will not show a drive letter of C:, so don’t worry about that. Choose the drive letter, then enter a name and a description for the final install image (it can be the same as the boot image that you created earlier if you want).
    wds_captureinstallimageinfo
  16. Now you need to decide where to place the created image. You can browse to a local location or a network location. Most of the drive letters that you see will belong to the boot image and are not writable, so be careful. If you want to save the image to the same drive that’s being imaged, it’s the Volume to capture from the previous page. If you want to upload to the WDS server, then enter the name of that server and click Connect. Then, choose one of the install image groups to hold it. Note: If you enter something for the WDS server, it will do the file save and then transmit it; this is not an either/or dialog. I filled out all the fields of the dialog to show you what it looks like. Myself, I will clear the upload box and perform the import of the WDS image later.
    wds_captargetlocation
  17. Relax, find something else to do. This will take some time. When the process completes and you click Finish, the system will reboot to its sysprepped state.
  18. If you didn’t upload it directly, the generated WIM file needs to be placed somewhere that the WDS server can reach it. I’ll cover adding images to the WDS server’s repository in the next section.
  19. Retarget the boot.wim file for this server (undo step 8).

Once the image is added to the WDS server’s list, you can boot this server in PXE mode and reinstall the exact image that you just captured.

Adding Install Images to a WDS Server

One way or another, you should have at least one install image now. Install images contain the operating system that will be installed along with anything else that you placed inside the image. Adding images is a very simple process.

  1. First, you need an install group, if you don’t already have one. WDS will deduplicate the images inside an install group, so organize them by the operating system. You can create the install image groups separately, or while you are importing an install image. To create one separately, right-click Install Images and click Add Image Group. You’ll get a small dialog where you can enter the name for your group.
  2. To add an image, right-click Install Images and click Add Install Image.
    wds_addinstallimage
  3. Choose or create the image group to add the image to.
    wds_targetimagegroup
  4. Locate the install image file that was saved from a previous section.
    wds_sourceimagefile
  5. Since we created WIMs from live systems, they only contain a single image. If you like the name that you chose, leave the Use the default name and description for each of the selected images box checked.
    wds_sourceimageindex
  6. If you didn’t leave that box checked, you’re now looking at a screen that asks you to enter a name and description. I didn’t do that, so I have no screen shot.
  7. After that, just Next and Finish. The file will be imported. It will take some time as it prepares/calculates deduplication info.

That’s all it takes to add images. Now, when systems boot, they’ll have each of these items as possible install images… maybe. You can override that.

Setting Install Image Filters

So, I showed you how to create an image just for a specific machine. You really don’t want other machines installing that image, do you? Use filters to lock an image to machines that match particular criteria.

  1. In WDS, click Active Directory Prestaged Devices. Find the computer object that you want to scope to. Open its Properties dialog. On the General page, highlight its Device Id. Copy it to the clipboard, then close the dialog.
  2. In the Install Images list on your WDS server, find the image that you want to modify. Right-click on it and click Properties.
  3. Switch to the Filters tab.
  4. Click Add.
  5. Set the Filter type to UUID and leave the Operator at Equal to. Paste the Device Id that you copied to the text box and click Add.
    wds_uuidfilter
  6. You can add other filters if you like. If you want to change an existing filter, for instance, to add another UUID, then you edit the existing filter. The dialog won’t allow you to add another instance of the same filter.
    wds_filterlist

Now, when the system with that UUID connects to the PXE server, it, and only it, will be able to choose that install image.

wds_scopedinstallwimchoice

Look through the tabs for other options. For instance, on User Permissions, you can restrict an image so that only certain Active Directory user accounts can install it.

Install an Image from a WDS Server

And now, the moment that you’ve been working up to: installing an image from your WDS system.

  1. Pre-flight check:
    1. A DHCP server is available and visible to the layer 2 network that the target server is connected to
    2. The WDS server allows connections (PXE Response tab of the server’s properties dialog)
    3. You have configured the pre-staged device account
    4. You have configured the boot.wim and scoped the pre-staged device to it
    5. You have an install image for this server and, if you enable a filter, it matches the server
    6. You have access to the server’s console, directly or via an out-of-band management tool
    7. The server is set to PXE boot
  2. Boot the server in PXE mode. It should connect to the server and start downloading the boot file. Depending on how you set your process, it may jump right into the download or you may need to press a specific key.
    wds_pxescreen
  3. You will then be taken to some screens that look similar to a regular media-based install. Start with the language selection. If you are booting a UEFI/Generation 2 VM and it fails to reach this point, you may have some troubleshooting ahead of you. I followed many articles before learning that the WDS server sometimes corrupts its own files. I fixed that with the instructions by “jpdand” on the Microsoft forums.
    wds_setup1
  4. You will be prompted for domain credentials. Remember that install images can be scoped to specific accounts/groups, so choose accordingly.
    wds_credentials
  5. Next, you will choose the image to install.
    wds_scopedinstallwimchoice
  6. From this point onward, everything will work like a standard install of your chosen operating system. You’ll set up partitions and choose which to install to, etc. I’m assuming you’ve done all that before and don’t need my help.

Once the install has completed, your system will boot. It will be a member of the domain, have the name that you assigned, and be in the OU that you assigned. Any software and settings that were in the original image will be present in the newly-installed environment. Networking often needs some help, unless you’re using DHCP.

That’s it! Windows Deployment Service takes some effort to fully set up and configure, but once it’s done, you’ll be very glad that you have it. No more digging for install media!

Have any questions or feedback?

Leave a comment below!


6 Hardware Tweaks that will Skyrocket your Hyper-V Performance

17 Aug 2017 by Eric Siron     0     Hyper-V Articles

Few Hyper-V topics burn up the Internet quite like “performance”. No matter how fast it goes, we always want it to go faster. If you search even a little, you’ll find many articles with long lists of ways to improve Hyper-V’s performance. The less focused articles start with general Windows performance tips and sprinkle some Hyper-V-flavored spice on them. I want to use this article to tighten the focus down on Hyper-V hardware settings only. That means it won’t be as long as some others; I’ll just think of that as wasting less of your time.

1. Upgrade your system

I guess this goes without saying but every performance article I write will always include this point front-and-center. Each piece of hardware has its own maximum speed. Where that speed barrier lies in comparison to other hardware in the same category almost always correlates directly with cost. You cannot tweak a go-cart to outrun a Corvette without spending at least as much money as just buying a Corvette — and that’s without considering the time element. If you bought slow hardware, then you will have a slow Hyper-V environment.

Fortunately, this point has a corollary: don’t panic. Production systems, especially server-class systems, almost never experience demand levels that compare to the stress tests that admins put on new equipment. If typical load levels were that high, it’s doubtful that virtualization would have caught on so quickly. We use virtualization for so many reasons nowadays, we forget that “cost savings through better utilization of under-loaded server equipment” was one of the primary drivers of early virtualization adoption.

2. BIOS Settings for Hyper-V Performance

Don’t neglect your BIOS! It contains some of the most important settings for Hyper-V.

  • C States. Disable C States! Few things impact Hyper-V performance quite as strongly as C States! Names and locations will vary, so look in areas related to Processor/CPU, Performance, and Power Management. If you can’t find anything that specifically says C States, then look for settings that disable/minimize power management. C1E is usually the worst offender for Live Migration problems, although other modes can cause issues.
  • Virtualization support: A number of features have popped up through the years, but most BIOS manufacturers have since consolidated them all into a global “Virtualization Support” switch, or something similar. I don’t believe that current versions of Hyper-V will even run if these settings aren’t enabled. Here are some individual component names, for those special BIOSs that break them out:
    • Virtual Machine Extensions (VMX)
    • AMD-V — AMD CPUs/mainboards. Be aware that Hyper-V can’t (yet?) run nested virtual machines on AMD chips
    • VT-x, or sometimes just VT — Intel CPUs/mainboards. Required for nested virtualization with Hyper-V in Windows 10/Server 2016
  • Data Execution Prevention: DEP means less for performance and more for security. It’s also a requirement. But, we’re talking about your BIOS settings and you’re in your BIOS, so we’ll talk about it. Just make sure that it’s on. If you don’t see it under the DEP name, look for:
    • No Execute (NX) — AMD CPUs/mainboards
    • Execute Disable (XD) — Intel CPUs/mainboards
  • Second Level Address Translation: I’m including this for completeness. It’s been many years since any system was built new without SLAT support. If you have one, following every point in this post to the letter still won’t make that system fast. Starting with Windows 8 and Server 2016, you cannot use Hyper-V without SLAT support. Names that you will see SLAT under:
    • Nested Page Tables (NPT)/Rapid Virtualization Indexing (RVI) — AMD CPUs/mainboards
    • Extended Page Tables (EPT) — Intel CPUs/mainboards
  • Disable power management. This goes hand-in-hand with C States. Just turn off power management altogether. Get your energy savings via consolidation. You can also buy lower wattage systems.
  • Use Hyperthreading. I’ve seen a tiny handful of claims that Hyperthreading causes problems on Hyper-V. I’ve heard more convincing stories about space aliens. I’ve personally seen the same number of space aliens as I’ve seen Hyperthreading problems with Hyper-V (that would be zero). If you’ve legitimately encountered a problem that was fixed by disabling Hyperthreading AND you can prove that it wasn’t a bad CPU, that’s great! Please let me know. But remember, you’re still in a minority of a minority of a minority. The rest of us will run Hyperthreading.
  • Disable SCSI BIOSs. Unless you are booting your host from a SAN, kill the BIOSs on your SCSI adapters. It doesn’t do anything good or bad for a running Hyper-V host but slows down physical boot times.
  • Disable BIOS-set VLAN IDs on physical NICs. Some network adapters support VLAN tagging through boot-up interfaces. If you then bind a Hyper-V virtual switch to one of those adapters, you could encounter all sorts of network nastiness.

3. Storage Settings for Hyper-V Performance

I wish the IT world would learn to cope with the fact that rotating hard disks do not move data very quickly. If you just can’t cope with that, buy a gigantic lot of them and make big RAID 10 arrays. Or, you could get a stack of SSDs. Don’t get six or so spinning disks and get sad that they “only” move data at a few hundred megabytes per second. That’s how the tech works.

Performance tips for storage:

  • Learn to live with the fact that storage is slow.
  • Remember that speed tests do not reflect real world load and that file copy does not test anything except permissions.
  • Learn to live with Hyper-V’s I/O scheduler. If you want a computer system to have 100% access to storage bandwidth, start by checking your assumptions. Just because a single file copy doesn’t go as fast as you think it should, does not mean that the system won’t perform its production role adequately. If you’re certain that a system must have total and complete storage speed, then do not virtualize it. The only way that a VM can get that level of speed is by stealing I/O from other guests.
  • Enable read caches
  • Carefully consider the potential risks of write caching. If acceptable, enable write caches. If your internal disks, DAS, SAN, or NAS has a battery backup system that can guarantee clean cache flushes on a power outage, write caching is generally safe. Internal batteries that report their status and/or automatically disable caching are best. UPS-backed systems are sometimes OK, but they are not foolproof.
  • Prefer few arrays with many disks over many arrays with few disks.
  • Unless you’re going to store VMs on a remote system, do not create an array just for Hyper-V. By that, I mean that if you’ve got six internal bays, do not create a RAID-1 for Hyper-V and a RAID-x for the virtual machines. That’s a Microsoft SQL Server 2000 design. This is 2017 and you’re building a Hyper-V server. Use all the bays in one big array.
  • Do not architect your storage to make the hypervisor/management operating system go fast. I can’t believe how many times I read on forums that Hyper-V needs lots of disk speed. After boot-up, it needs almost nothing. The hypervisor remains resident in memory. Unless you’re doing something questionable in the management OS, it won’t even page to disk very often. Architect storage speed in favor of your virtual machines.
  • Set your fibre channel SANs to use very tight WWN masks. Live Migration requires a hand off from one system to another, and the looser the mask, the longer that takes. With 2016 the guests shouldn’t crash, but the hand-off might be noticeable.
  • Keep iSCSI/SMB networks clear of other traffic. I see a lot of recommendations to put each and every iSCSI NIC on a system into its own VLAN and/or layer-3 network. I’m on the fence about that; a network storm in one iSCSI network would probably justify it. However, keeping those networks quiet would go a long way on its own. For clustered systems, multi-channel SMB needs each adapter to be on a unique layer 3 network (according to the docs; from what I can tell, it works even with same-net configurations).
  • If using gigabit, try to physically separate iSCSI/SMB from your virtual switch. Meaning, don’t make that traffic endure the overhead of virtual switch processing, if you can help it.
  • Round robin MPIO might not be the best, although it’s the most recommended. If you have one of the aforementioned network storms, Round Robin will negate some of the benefits of VLAN/layer 3 segregation. I like least queue depth, myself.
  • MPIO and SMB multi-channel are much faster and more efficient than the best teaming.
  • If you must run MPIO or SMB traffic across a team, create multiple virtual or logical NICs. It will give the teaming implementation more opportunities to create balanced streams.
  • Use jumbo frames for iSCSI/SMB connections if everything supports it (host adapters, switches, and back-end storage). You’ll improve the header-to-payload bit ratio by a meaningful amount.
  • Enable RSS on SMB-carrying adapters. If you have RDMA-capable adapters, absolutely enable that.
  • Use dynamically-expanding VHDX, but not dynamically-expanding VHD. I still see people recommending fixed VHDX for operating system VHDXs, which is just absurd. Fixed VHDX is good for high-volume databases, but mostly because they’ll probably expand to use all the space anyway. Dynamic VHDX enjoys higher average write speeds because it completely ignores zero writes. No defined pattern has yet emerged that declares a winner on read rates, but people who say that fixed always wins are making demonstrably false assumptions.
  • Do not use pass-through disks. The performance is sometimes a little bit better, but sometimes it’s worse, and it almost always causes some other problem elsewhere. The trade-off is not worth it. Just add one spindle to your array to make up for any perceived speed deficiencies. If you insist on using pass-through for performance reasons, then I want to see the performance traces of production traffic that prove it.
  • Don’t let fragmentation keep you up at night. Fragmentation is a problem for single-spindle desktops/laptops, “admins” that never should have been promoted above first-line help desk, and salespeople selling defragmentation software. If you’re here to disagree, you better have a URL to performance traces that I can independently verify before you even bother entering a comment. I have plenty of Hyper-V systems of my own on storage ranging from 3-spindle up to >100 spindle, and the first time I even feel compelled to run a defrag (much less get anything out of it) I’ll be happy to issue a mea culpa. For those keeping track, we’re at 6 years and counting.

4. Memory Settings for Hyper-V Performance

There isn’t much that you can do for memory. Buy what you can afford and, for the most part, don’t worry about it.

  • Buy and install your memory chips optimally. Multi-channel memory is somewhat faster than single-channel. Your hardware manufacturer will be able to help you with that.
  • Don’t over-allocate memory to guests. Just because your file server had 16GB before you virtualized it does not mean that it has any use for 16GB.
  • Use Dynamic Memory unless you have a system that expressly forbids it. It’s better to stretch your memory dollar farther than wring your hands about whether or not Dynamic Memory is a good thing. Until directly proven otherwise for a given server, it’s a good thing.
  • Don’t worry so much about NUMA. I’ve read volumes and volumes on it. Even spent a lot of time configuring it on a high-load system. Wrote some about it. Never got any of that time back. I’ve had some interesting conversations with people that really did need to tune NUMA. They constitute… oh, I’d say about .1% of all the conversations that I’ve ever had about Hyper-V. The rest of you should leave NUMA enabled at defaults and walk away.

5. Network Settings for Hyper-V Performance

Networking configuration can make a real difference to Hyper-V performance.

  • Learn to live with the fact that gigabit networking is “slow” and that 10GbE networking often has barriers to reaching 10Gbps for a single test. Most networking demands don’t even bog down gigabit. It’s just not that big of a deal for most people.
  • Learn to live with the fact that a) your four-spindle disk array can’t fill up even one 10GbE pipe, much less the pair that you assigned to iSCSI and that b) it’s not Hyper-V’s fault. I know this doesn’t apply to everyone, but wow, do I see lots of complaints about how Hyper-V can’t magically pull or push bits across a network faster than a disk subsystem can read and/or write them.
  • Disable VMQ on gigabit adapters. I think some manufacturers are finally coming around to the fact that they have a problem. Too late, though. The purpose of VMQ is to redistribute inbound network processing for individual virtual NICs away from CPU 0, core 0 to the other cores in the system. Current-model CPUs are fast enough that they can handle many gigabit adapters. A sample command appears just after this list.
  • If you are using a Hyper-V virtual switch on a network team and you’ve disabled VMQ on the physical NICs, disable it on the team adapter as well. I’ve been saying that since shortly after 2012 came out and people are finally discovering that I’m right, so, yay? Anyway, do it.
  • Don’t worry so much about vRSS. RSS is like VMQ, only for non-VM traffic. vRSS, then, is the projection of VMQ down into the virtual machine. Basically, with traditional VMQ, the VMs’ inbound traffic is separated across pNICs in the management OS, but then each guest still processes its own data on vCPU 0. vRSS splits traffic processing across vCPUs inside the guest once it gets there. The “drawback” is that distributing processing and then redistributing processing causes more processing. So, the load is nicely distributed, but it’s also higher than it would otherwise be. The upshot: almost no one will care. Set it or don’t set it, it’s probably not going to impact you a lot either way. If you’re new to all of this, then you’ll find an “RSS” setting on the network adapter inside the guest. If that’s on in the guest (off by default) and VMQ is on and functioning in the host, then you have vRSS. woohoo.
  • Don’t blame Hyper-V for your networking ills. I mention this in the context of performance because your time has value. I’m constantly called upon to troubleshoot Hyper-V “networking problems” because someone is sharing MACs or IPs or trying to get traffic from the dark side of the moon over a Cat-3 cable with three broken strands. Hyper-V is also almost always blamed by people that just don’t have a functional understanding of TCP/IP. More wasted time that I’ll never get back.
  • Use one virtual switch. Multiple virtual switches cause processing overhead without providing returns. This is a guideline, not a rule, but you need to be prepared to provide an unflinching, sure-footed defense for every virtual switch in a host after the first.
  • Don’t mix gigabit with 10 gigabit in a team. Teaming will not automatically select 10GbE over the gigabit. 10GbE is so much faster than gigabit that it’s best to just kill gigabit and converge on the 10GbE.
  • 10x gigabit cards do not equal 1x 10GbE card. I’m all for only using 10GbE when you can justify it with usage statistics, but gigabit just cannot compete.
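
A sample command for the VMQ bullet above, assuming a gigabit adapter named “Slot 4 Port 1” (check the current state with Get-NetAdapterVmq and substitute your own adapter names):

  Disable-NetAdapterVmq -Name 'Slot 4 Port 1'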

6. Maintenance Best Practices

Don’t neglect your systems once they’re deployed!

  • Take a performance baseline when you first deploy a system and save it.
  • Take and save another performance baseline when your system reaches a normative load level (basically, once you’ve reached its expected number of VMs).
  • Keep drivers reasonably up-to-date. Verify that settings aren’t lost after each update.
  • Monitor hardware health. The Windows Event Log often provides early warning symptoms, if you have nothing else.

Further reading

If you carry out all (or as many as possible) of the above hardware adjustments, you will witness a considerable jump in your Hyper-V performance. That I can guarantee. However, for those who don’t have the time or patience, or aren’t prepared to make the necessary investment in some cases, Altaro has developed an e-book just for you. Find out more about it here: Supercharging Hyper-V Performance for the time-strapped admin.

Have any questions or feedback?

Leave a comment below!

The Complete Guide to Hyper-V Networking

08 Aug 2017 by Eric Siron     0     Hyper-V Articles

I frequently write about all sorts of Hyper-V networking topics. I was surprised to learn that we’ve never published a unified article that gives a clear and complete how-to that brings all of these related topics into one resource. We’ll fix that right now.

Understanding the Basics of Hyper-V Networking

We have produced copious amounts of material explaining the various concepts around Hyper-V networking. I want to spend as little time as possible on that here. Comprehension is very important, though, so here’s an index of expository work:

  • How the Hyper-V Virtual Switch Works: If you don’t understand the contents of that article, you will have a very difficult time administering Hyper-V. Read it, and read it again until you have absorbed it. It answers easily 90% of the questions that I receive about Hyper-V networking. If something there doesn’t make sense, ask.
  • The OSI model and Hyper-V: A quick read on the OSI model and a brief introduction to its relevance to Hyper-V. If you’ve been skimming over the terms “layer 2” and “layer 3” because you don’t have a solid understanding of them, read it.
  • Hyper-V and VLANs: That article ties closely to the OSI article. VLANs are a layer 2 technology. Due to common usage, newcomers often confuse them with layer 3 operations. I’m frequently asked about trunking multiple VLANs into a virtual machine, even though I’m fairly certain that most people don’t really want to do that. This article should help you sort out those concepts.
  • Hyper-V and IP: That article also ties closely to the OSI article and contrasts against the VLAN article. It doesn’t contain a great deal of direct Hyper-V knowledge, but it should help fill any of the most serious deficiencies in TCP/IP comprehension.
  • Hyper-V and Link Aggregation (Teaming): That article describes the concepts around NIC teaming and addresses some of the myths that I encounter. The article that you’re reading now will bring you the “how”.
  • Hyper-V and DNS: If I were to compile a list of ridiculously simple technologies that people tend to ridiculously over-complicate, I’d place DNS in the top slot. Hyper-V itself cares nothing about DNS, but its management operating systems and guests care very much. Poor DNS configurations can be blamed for nearly all of the world’s technological ills. You must learn it. It won’t take long.
  • Hyper-V and Binding Order: Lots of administrators spend lots of time wringing their hands over network binding order. Stop. Only the DNS subsystem and one other thing (that I’ve now forgotten about) pay any attention to the binding order. If you get that, then you don’t really need to read the linked article.
  • Hyper-V and Load Balancing Algorithms: The “hows” of load balancing algorithms will be on display in the article that you’re reading. If you want to understand the “what” and the “why”, then follow the link.
  • Hyper-V and MPIO and Teaming for Storage: I see lots of complaints from people that create a switch independent team on a pair of 10GbE pipes that wind back to a storage array with 5x 10,000 RPM disks. They test it with a file copy and don’t understand why they can’t move 20Gbps. Invariably, they blame Hyper-V. If you don’t want to be that guy, the linked article should help.

That should serve as a decent reference on the concepts. If you don’t understand something written below, it’s probably because you don’t understand something linked above.

Contents of this Article

I will demonstrate common PowerShell and, where available, GUI methods for working with:

  • Standard network adapter teaming
  • Hyper-V virtual switch
  • Switch Embedded Teaming
  • Hyper-V virtual adapters

PowerShell or GUI?

Use PowerShell for quick, precise, repeatable, scriptable operations. Use the GUI to do the same amount of work in twice the time following four times as many instructions. I will show all of the PowerShell methods first for the benefit of those that just want to get things done. If you prefer to plod through dozens of GUI screens, scroll to the bottom half of the article. Be aware that many things can’t be done in the GUI.

If you’re just getting started with PowerShell, remember to use tab completion! It makes all the difference!

Creating and Working with Network Adapter Teams for Hyper-V in PowerShell

If you’re interested in Switch Embedded Teaming (Server 2016 only), then look a few headings downward. This section applies to the standard Microsoft teaming methods.

First things first. You need to know which adapters to add to the team. Discover your available adapters:
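
Something like the following will list them, along with their current status:

  Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status, LinkSpeed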

I’ll use my system for reference. I’ve renamed all of the adapters in my system so that I can recognize them. If your hardware supports Consistent Device Naming, then you’ll likely already have actionable names (like “Slot 4 Port 1”). If not, you’ll need to find your own way to identify adapters. I use my switch’s interface to enable the ports one at a time, identifying the adapters as they switch to Connected status.


The PowerShell cmdlets for networking allow you to use names, indexes, or descriptions to manipulate adapters. The teaming cmdlets only work with names.

Create a Windows Team

Create teams with New-NetLbfoTeam.

I use my demo machines’ “P*L” adapters for Hyper-V teams. One way to create a team for them:
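
For example, assuming two of those adapters are named “P1L” and “P2L” (substitute your own adapter and team names):

  New-NetLbfoTeam -Name vSwitch -TeamMembers 'P1L', 'P2L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic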

I usually Name my team for the virtual switch that I create on it, but choose any name that you like. The TeamMembers field accepts a comma-separated list of the names of the physical adapters to add to the team. I promised not to go into detail on the options, and I won’t. Just remember that the other parameters and their values are selectable by tab completion. SwitchIndependent is the preferred teaming mode in most cases with LACP being second. I have never seen any compelling reason to use a load balancing algorithm other than Dynamic.

To save even more time and space, the cmdlet is smart enough to allow you to use wildcards for the adapter names:
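
For example, with the same placeholder adapter names:

  New-NetLbfoTeam -Name vSwitch -TeamMembers P*L -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic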

If you want to avoid the prompt for scripting purposes, add the Force parameter.

A Note on the Team NIC

When you create a team, you also create a logical adapter that represents that team. A logical team NIC (often abbreviated as tNIC) works in a conceptually similar fashion to a Hyper-V virtual NIC. You treat it just like you would a physical adapter — give it an IP address, etc. The team determines what to do with your traffic. If you use the cmdlets as shown above, one team NIC will be created and it will have the same name as the team (“vSwitch”, in this case). You can override that name with the TeamNicName parameter.

You can also add more team NICs to a team. For a team that hosts a Hyper-V virtual switch, it’s neither recommended nor supported, although the system will allow it. Additional tNICs must be created in their own VLAN, which hides that VLAN from the team. Also, it’s not documented or clear how additional tNICs affect any QoS settings on a Hyper-V virtual switch.

For the rest of this article, the single automatically-created tNIC will be the only one referenced.

Examine Teams and tNICs

View all teams and their statuses with Get-NetLbfoTeam. You don’t need to supply any parameters. I get more use out of Get-NetLbfoTeamMember, also without parameters.

Remove and Add Team Members

You can easily remove team members if you have the need:
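
For example, assuming an adapter named “P1L” in a team named “vSwitch”:

  Remove-NetLbfoTeamMember -Name 'P1L' -Team vSwitch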

And add them:
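
Using the same placeholder names:

  Add-NetLbfoTeamMember -Name 'P1L' -Team vSwitch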

Removing an adapter obviously disrupts the traffic on that member, but the team will handle it well. You can add a team member at any time.

Delete a Team

Use Remove-NetLbfoTeam to get rid of a team. You can use the Name parameter to reverse what you’ve done. Since my hosts only ever use a single team, I can do this:
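
On a single-team host, a pipeline does the job without typing any names (or supply the team’s name to the Name parameter):

  Get-NetLbfoTeam | Remove-NetLbfoTeam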

Working with the Hyper-V Virtual Switch

I always use Hyper-V virtual switches and Microsoft teams together, so I have a certain technique. You may choose a different path. Just understand that external switches must be created on an adapter. I will always use the default tNIC. If you’re not teaming, then you’ll pick a single physical NIC. Use Get-NetAdapter as shown in the teaming section above to determine the name of the adapter that you wish to use.

Create a Virtual Switch

Use New-VMSwitch to create a new switch. Most commonly, you’ll want the external type (refer to the articles linked at the beginning if you need an explanation). External switches require you to specify a logical or physical (but not virtual) adapter. You can use its friendly name or its less friendly description. I use the name. In my case, I’m binding to a team’s logical adapter, so, as explained a bit ago, I’ll use the team’s name.
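
A minimal sketch, assuming the team NIC from the teaming section is named “vSwitch” (pick the QoS mode deliberately, since it cannot be changed later):

  New-VMSwitch -Name vSwitch -NetAdapterName vSwitch -AllowManagementOS $false -MinimumBandwidthMode Weight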

For internal or private, use the SwitchType parameter instead of the NetAdapterName parameter and do not use AllowManagementOS.

Several things to note about the New-VMSwitch cmdlet:

  • New-VMSwitch is not one of the better-developed cmdlets. Usually, when tabbing through available parameters, your options are presented in a sensible order. New-VMSwitch’s parameters are all over the place.
  • The documentation for every version of New-VMSwitch always says that the default MinimumBandwidthMode is Weight. I used to classify this as an error, but it’s been going on for so long I’m starting to wonder if it’s an unfunny inside joke or a deliberate lie. The default is Absolute. Most people won’t ever need QoS, so I don’t know that it has practical importance. However, you can’t change a switch’s QoS mode after it’s been created, so I’d rather tell you this up front.
  • The “AllowManagementOS” parameter’s name is nonsense. What it really means is “immediately create a virtual adapter for the management operating system”. The only reason that I don’t allow it to create one is because it uses the same name for the virtual adapter as the virtual switch. That’s confusing for people that don’t know how all of this works. You can always add virtual adapters later, so the “allow” verb makes no sense whatsoever.

Manipulate a Virtual Switch

Use Set-VMSwitch to make changes to your switch. The cmdlet has so many options that I can’t rationally explain it all. Just scan the parameter list to find what you want. A couple of notes, though:

  • You can’t change the QoS mode of an existing virtual switch.
  • You can switch between External, Internal, and Private types.
    • To go from External to either of the other types: Set-VMSwitch -Name vSwitch -SwitchType Internal. Just use Private instead of Internal if you want that switch type.
    • To go from Private or Internal to External: Set-VMSwitch -Name vSwitch -NetAdapterName vSwitch. You’d also use this format to move a virtual switch from one physical/logical network adapter to another.
  • You can’t rename a virtual switch with this cmdlet. Use Rename-VMSwitch.

Remove a Virtual Switch

Appropriately enough, Remove-VMSwitch removes a virtual switch.

You can remove all virtual switches in one shot:
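
A simple pipeline handles that (Force suppresses the confirmation prompt):

  Get-VMSwitch | Remove-VMSwitch -Force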

When a switch is removed, virtual NICs on VMs are disconnected. Virtual NICs for the management OS are destroyed.

Speaking of virtual NICs, that’s the next thing you care about if you’re using a standard virtual switch. I’ll explain them after the Switch Embedded Team section.

Working with Hyper-V Switch Embedded Teams

Server 2016 adds Switch Embedded Teaming. If you’re planning to create a team of gigabit adapters, then I recommend that you use the traditional teaming method outlined above. I wrote an article explaining why.

Create a Switch Embedded Team (SET)

Use the familiar New-VMSwitch to set it up, but add the EnableEmbeddedTeaming option. Two other options not shown in the following are EnableIov and EnablePacketDirect.
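
A sketch of the command, assuming physical adapters named “P1L” and “P2L” (substitute your own):

  New-VMSwitch -Name vSwitch -NetAdapterName 'P1L', 'P2L' -EnableEmbeddedTeaming $true -MinimumBandwidthMode Weight -AllowManagementOS $false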

The documentation continues to be wrong on MinimumBandwidthMode. If you don’t specify otherwise, you get Absolute. Prefer Weight.

Use EnableIov if, and only if, you have 10GbE adapters that support it. I cannot find any details on Packet Direct anywhere. Everyone just repeats that it provides a low-latency connection that bypasses the virtual switch. A few sources add that it will force Hyper-V Port load balancing mode. My hardware doesn’t support it, so I can’t test it. I assume that it only works on 10GbE and probably only with SR-IOV.

Once a SET has been created, you view it with both Get-VMSwitch and Get-VMSwitchTeam. For whatever reason, they decided that the output should include the difficult-to-read interface descriptions instead of names like Get-NetLbfoTeam. You can see the adapter names with something like this:
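
For example, assuming the SET is named “vSwitch”:

  (Get-VMSwitchTeam -Name vSwitch).NetAdapterInterfaceDescription | ForEach-Object { Get-NetAdapter -InterfaceDescription $_ } | Format-Table Name, InterfaceDescription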

The SET cmdlets have no analog for Get-NetLbfoTeamMember.

SET does not expose a logical adapter to Windows the way that LBFO does.

Manipulate a Switch Embedded Team

You can change the members and the load balancing mode for a SET using Set-VMSwitchTeam.
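
For example, to switch the load balancing mode (SET only offers HyperVPort and Dynamic):

  Set-VMSwitchTeam -Name vSwitch -LoadBalancingAlgorithm HyperVPort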

Add and Remove SET Members

Instead of Set-VMSwitchTeam, you can use Add-VMSwitchTeamMember and Remove-VMSwitchTeamMember.
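
For example, with a placeholder adapter named “P3L”:

  Add-VMSwitchTeamMember -VMSwitchName vSwitch -NetAdapterName 'P3L'
  Remove-VMSwitchTeamMember -VMSwitchName vSwitch -NetAdapterName 'P3L'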

Remove a SET

Use Remove-VMSwitch to remove a SET. There is no Remove-VMSwitchTeam cmdlet.

Working with Virtual Network Adapters

You can attach virtual network adapters (vNICs) to the management operating system or virtual machines. You’ll most commonly use them with virtual machines, but you’ll also usually do less work with them. Their default settings tend to be sufficient and you can work with them through their owning virtual machine’s GUI property pages.

For almost every vNIC-related cmdlet, you must indicate whether you’re working with a management OS vNIC or a VM’s vNIC. Do this with the ManagementOS switch parameter or by supplying a value for either the VM or the VMName parameters. If you have a vNIC object, such as the one output by Get-VMNetworkAdapter, then you can pipe it to most of the vNIC cmdlets or provide it as the VMNetworkAdapter parameter. You won’t need to specify any of the other identifying parameters, including those previously mentioned in this paragraph, when you provide the vNIC object.

View a Virtual Network Adapter

The simple act of creating a virtual machine, or a virtual switch with AllowManagementOS set, creates a vNIC. To view them all:
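
The All switch covers both categories in a single call:

  # shows management OS vNICs and virtual machine vNICs together
  Get-VMNetworkAdapter -All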

Ordinarily, we give descriptive names to management OS vNICs, especially when we use more than one. If you didn’t specify AllowManagementOS, then you’ll have a vNIC with the same name as your vSwitch.

Each management OS vNIC will appear in the Network Connections applet and Get-NetAdapter with the format vEthernet (vNICName). Avoid confusion by changing the default vNIC’s name (shown in a bit). Many newcomers believe that this vNIC is the virtual switch because of that name. You cannot “see” the virtual switch anywhere except in Hyper-V-specific management tools.
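
A rename might look like this, assuming the default vNIC inherited the switch name “vSwitch”:

  Rename-VMNetworkAdapter -ManagementOS -Name vSwitch -NewName Management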

Ordinarily, we leave the default name of “Network Adapter” for virtual machine vNICs. New in 2016, changes to a guest’s vNIC name will appear in the guest operating system if it supports Consistent Device Naming (CDN).

Manipulate a Virtual Network Adapter

Use Set-VMNetworkAdapter to change vNIC settings. As you can see, this cmdlet is quite busy; I could write multiple full-length articles on various parameter groups. Settings categories available with this command:

  • Quality of service (Qos)
  • Security (MAC spoofing, router guard, DHCP guard, storm limit)
  • Replica
  • In-guest teaming
  • Performance (VMQ, IOV, vRSS, Packet Direct)
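
As a single illustration from the security group, this would enable DHCP guard and router guard on a vNIC belonging to a hypothetical VM named “svtest”:

  Set-VMNetworkAdapter -VMName svtest -Name 'Network Adapter' -DhcpGuard On -RouterGuard On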

You need a different cmdlet for VLAN manipulation, though.

Manipulate Virtual Network Adapter VLANs

Use Set-VMNetworkAdapterVlan for all things VLAN on vNICs.

To place a management OS vNIC into a VLAN:
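
For example, assuming a management OS vNIC named “Management” (substitute your own vNIC name and VLAN ID):

  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Access -VlanId 10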

Remember that the VlanId parameter requires the Access parameter.

Also remember that there is no such thing as “VLAN 0”. For some unknown reason, the cmdlet will accept it and assign the adapter to VLAN 0, but strange things might happen. Usually, it’s just that you can’t get traffic in or out of the adapter. If you want to clear the adapter’s VLAN, don’t use VLAN 0. Use Untagged:
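
Using the same placeholder vNIC name:

  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Management -Untagged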

I’m not going to cover trunking or private VLANs. Trunking is very advanced and I don’t think more than 5 percent of the people that have asked me how to do it really wanted to do it. If you want a single virtual machine to exist in multiple VLANs, add virtual adapters and assign individual VLANs. Private VLANs require you to work with PrimaryVlanId, SecondaryVlanId, SecondaryVlanIdList, Promiscuous, Community, and Isolated as necessary. If you need to use private VLANs, then you or your networking engineer should already understand each of these terms and intuitively understand how to use the parameters.

Since we’re commonly asked, the Promiscuous parameter on Set-VMNetworkAdapterVlan does not have anything to do with accepting or participating in all passing layer 2 traffic. It is only for private VLANs.

Adding and Removing Virtual Network Adapters

Use Add-VMNetworkAdapter and Remove-VMNetworkAdapter for their respective tasks.
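
A few examples with placeholder names (a management OS vNIC called “Cluster” and a VM called “svtest”):

  Add-VMNetworkAdapter -ManagementOS -Name Cluster -SwitchName vSwitch
  Add-VMNetworkAdapter -VMName svtest -SwitchName vSwitch
  Remove-VMNetworkAdapter -VMName svtest -Name 'Network Adapter'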

Connecting and Disconnecting Virtual Network Adapters to/from Virtual Switches

These cmdlets only work for virtual machine vNICs. You cannot dis/connect management OS vNICs; you can only add or remove them.

Connect always works. You do not need to disconnect an adapter from its current virtual switch to connect it to a new one. If you want to connect all of a VM’s vNICs to the same switch, specify only the virtual machine in VMName.
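
For example, to connect every vNIC on a hypothetical VM named “svtest” to the switch “vSwitch”:

  Connect-VMNetworkAdapter -VMName svtest -SwitchName vSwitch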

If you provide the Name parameter, then only that vNIC will be altered:
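
A sketch using the default vNIC name (all names are placeholders):

    Connect-VMNetworkAdapter -VMName 'demovm' -Name 'Network Adapter' -SwitchName 'vSwitch'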

These two cmdlets do not provide a VM parameter. It is possible for two virtual machines to have the same name. If you need to discern between two VMs with the same name, use the pipeline and filter from other cmdlets:
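
One possible approach, assuming two VMs share the name “demovm” and you have already looked up the ID of the one you want (the all-zeroes GUID below is a placeholder to replace):

    # List the duplicates and note the Id of the one you want
    Get-VM -Name 'demovm' | Select-Object -Property Name, Id
    # Target it by Id and pipe its vNICs through
    Get-VM -Id '00000000-0000-0000-0000-000000000000' | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName 'vSwitch'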

Use Disconnect-VMNetworkAdapter the same way, leaving off the SwitchName parameter.

VLAN information is preserved across dis/connects.

Other vNIC Settings

I did not touch on the entire range of possible vNIC cmdlets or their settings. You can go to the root 2016 Hyper-V PowerShell page and view all available cmdlets. Search the page for adapter, and you’ll find many hits.

Using the GUI for Hyper-V Networking

The GUI lags dramatically behind PowerShell for most things related to Hyper-V, and I doubt any category shows that as strongly as networking. So, whether you (or I) like it or not, using the GUI for Hyper-V networking qualifies as “beginner mode”. Most of the things that I showed you above cannot be done in the GUI at all. Unless you’re managing a single host with a single network adapter, the GUI will probably not help you much.

The following sections show you the few things that you can do in the GUI.

Working with Windows Teams

The GUI does give you some decent capability for working with Windows teams.

Create a Windows Team

You can use the GUI to create teams on Server 2012 and later. You can find the applet in Server Manager on the Local Server tab.

You can also run lbfoadmin.exe from the Run window or an elevated prompt.

Once open, click the Tasks drop-down in the Teams section. Click New Team.

[Screenshot: Tasks drop-down in the Teams section]

You’ll get the NIC Teaming/New team dialog, where you’ll need to fill out most fields:

[Screenshot: NIC Teaming / New team dialog]

Manipulate a Team

To make changes to your team later, just return to the same screens and dialogs using the same methods as you used to create the team.

Delete a Team

To delete a team, use the Delete function in the same place on the main lbfoadmin screen where you found the New Team function. Make sure to highlight the team that you want to delete, first.

Working with the Hyper-V Virtual Switch

The GUI provides very limited ability to work with Hyper-V virtual switches. You can’t configure the switch’s QoS mode (you can only set QoS on individual vNICs), and it allows almost nothing to be done with management OS vNICs.

Create a Hyper-V Virtual Switch

When using the Add Roles wizard to enable Hyper-V, you can create a virtual switch. I won’t cover that. If you’re looking at that screen, wondering what to do, I recommend that you skip it and follow the PowerShell directions above. If you simply must use a GUI, then wait until after the role finishes installing and create one using Hyper-V Manager.

To create a new virtual switch in Hyper-V Manager:

  1. Right-click the host in Hyper-V Manager and click Virtual Switch Manager. Alternatively, you’ll find this same menu at the far right of the main screen under Actions.
  2. At the left of the dialog, highlight New virtual network switch.
  3. On the right, choose the type of switch that you want to create. I’m not entirely sure why it even asks because you can pick anything you want once you click Create Virtual Switch.
    [Screenshot: choosing the virtual switch type]
  4. The creation screen itself is very busy. I’ll tackle that in a moment. First, look to the left of the dialog at the blue text. It’s a new entry named New Virtual Switch. It represents what you’re working on now. If you change the name, you’ll see this list item change as well. You can use Apply to make changes and continue working without closing the dialog. You can even add another switch before you accept this one.
    [Screenshot: the New Virtual Switch entry]

Now for the new switch screen. Look after the screenshot for an explanation of the items:

[Screenshot: Virtual Switch properties]

First item: name your switch.

I would skip the notes field, especially in a failover cluster.

For Connection Type, you decide between External, Internal, and Private, which is why I don’t understand why the initial dialog even asked. If you choose External, you’ll need to pick a logical or physical adapter for binding. Unfortunately, you can only see the fairly useless adapter description fields; look in the Network Connections applet to determine which is which. This right here is one of the primary reasons I like switch creation in PowerShell better.

Remember that the IOV setting is permanent; you cannot enable or disable SR-IOV on an existing virtual switch.

I despise the item here called Allow management operating system to share this network adapter. That description has absolutely no relation to what the checkbox does. If you check it, it will automatically create a virtual NIC in the management OS for this virtual switch and give it the same name as the virtual switch. That’s all it does. There is no “sharing”, and there is no permanent allowing or disallowing going on.

The VLAN ID section ties to the nonsensical “Allow…” field. If you let the system create a management OS vNIC for you, then you can use this to give it a VLAN ID.

You can use the Remove button if you decide that you don’t want to create the virtual switch after all. Cancel would work, too.

Where’s the QoS? Oh, you can’t set the QoS mode for a virtual switch using the GUI. PowerShell only. If you use this screen to create a virtual switch, it will use the Absolute QoS mode. Forever. Another reason to choose PowerShell.
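
A hedged sketch of creating a switch in Weight mode instead (the switch and adapter names are placeholders):

    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'Ethernet' -MinimumBandwidthMode Weight -AllowManagementOS $false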

Manipulate a Virtual Switch

To make changes to a virtual switch, follow the exact steps that you did to create one, except choose the existing virtual switch at the left of the Virtual Switch Manager dialog. Of course, you can’t change much, but there it is.

Remove a Virtual Switch

Retrace your creation steps. Select the virtual switch at the left of the Virtual Switch Manager screen. Click the Remove button at the bottom right.

Working with Hyper-V Switch Embedded Teams

You can’t use the GUI to work with Hyper-V SET. PowerShell-only.
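
A minimal sketch of building a SET (the switch and adapter names are placeholders):

    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'Ethernet 1', 'Ethernet 2' -EnableEmbeddedTeaming $true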

You can use the Virtual Switch Manager as described previously to remove one, though.

Working with Hyper-V Virtual Network Adapters

The GUI provides passably decent ability to work with vNICs — for guests. The only place that you can do anything with management OS vNICs is on that virtual switch creation screen. You can add or remove exactly one vNIC and you can set or remove its VLAN. You can’t use the GUI to work with two or more management OS vNICs. In fact, if you use PowerShell to add a second management OS vNIC, all related items in the dialog are grayed out and unusable.

But, for virtual machines, the GUI exposes most functionality.

Manipulate Virtual Network Adapters on Virtual Machines

In Hyper-V Manager or Failover Cluster Manager, open the Settings dialog for the virtual machine that you want to work with. On the left, find the vNIC and highlight it; the page will switch to its configuration screen. In the following screenshot, I’ve also expanded the vNIC so that you can see its subtabs, Hardware Acceleration and Advanced Features.

[Screenshot: virtual machine network adapter settings, with the Hardware Acceleration and Advanced Features subtabs visible]

On this screen, you can change the virtual switch this adapter connects to, or disconnect it. You can change or remove its VLAN. You can set its QoS. The bandwidth numbers here are always treated as Absolute values (Mbps), since that’s the default mode; the dialog does not change if your switch uses Weight mode. I would use PowerShell for that (a sketch follows). You can also Remove the vNIC here.
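
A hedged example of assigning a relative weight to a vNIC when its switch runs in Weight mode (“demovm” is a placeholder):

    Set-VMNetworkAdapter -VMName 'demovm' -MinimumBandwidthWeight 50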

The Hardware Acceleration subtab:

[Screenshot: Hardware Acceleration subtab]

Here, you can change:

  • Whether a VMQ can be assigned to this vNIC. The host’s adapters must support VMQ and a queue must be available for this checkbox to have any effect.
  • Whether IPSec tasks can be offloaded. The host’s physical adapter must support IPSec task offloading and have sufficient resources for the guest to offload IPSec tasks to the hardware.
  • Whether an SR-IOV virtual function can be assigned to this vNIC. The host’s adapters and motherboard must support IOV, it must be enabled on the adapter and in the BIOS, the virtual switch must be bound either to an unteamed adapter or to a SET, and a virtual function must be available for this checkbox to have any effect.
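
Rough PowerShell counterparts for the items in the list above, using a placeholder VM name:

    # VMQ and SR-IOV weights: 0 disables the feature, 1-100 expresses relative preference
    Set-VMNetworkAdapter -VMName 'demovm' -VmqWeight 100 -IovWeight 0
    # Cap the number of security associations the guest may offload
    Set-VMNetworkAdapter -VMName 'demovm' -IPsecOffloadMaximumSecurityAssociation 512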

The Advanced Features subtab:

[Screenshot: Advanced Features subtab]

Note that this screen scrolls, and I didn’t capture it all.

Here, you can change:

  • MAC address (both the mode, dynamic or static, and the address itself)
  • Whether or not the guest can spoof the MAC
  • If the guest is prevented from receiving DHCP discover/request frames
  • If the guest is prevented from receiving router discovery packets
  • If a failover cluster will move the guest if it loses network connectivity (Protected network)
  • If the vNIC’s traffic is mirrored to another vNIC. This feature seems to have troubles, FYI.
  • If teaming is allowed in the guest. The guest requires at least two vNICs and the virtual switch must be placed on a team or SET for this to function.
  • The Device naming switch allows the name of the vNIC to be propagated into the guest where an OS that supports Consistent Device Naming (CDN) can use it. Note that this is disabled by default, and the GUI doesn’t allow you to rename the vNIC. Use PowerShell for that.
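
Approximate PowerShell equivalents for several of these settings, again with a placeholder VM name (the MAC address shown is only an example from the Hyper-V range):

    # Assign a static MAC, turn on the guard features, and let the vNIC name flow into a CDN-aware guest
    Set-VMNetworkAdapter -VMName 'demovm' -StaticMacAddress '00155D012A00' -DhcpGuard On -RouterGuard On -DeviceNaming On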

Remove a Virtual Network Adapter

To remove a vNIC from a guest, find its tab in the VM’s settings dialog in Hyper-V Manager or Failover Cluster Manager. Use the Remove button at the bottom right. You’ll find a screenshot above in the Manipulate Virtual Network Adapters on Virtual Machines section.

Note: This guide will be periodically updated to make sure it covers all possible Hyper-V Networking problems. If you think I’ve missed anything please let me know in the comments below.

Have any questions or feedback?

Leave a comment below!