Tag Archives: HyperV

The ABC of Hyper-V – 7 Steps to Get Started

So you’ve installed Hyper-V, now what? Here are 7 steps to get you up and running in no time and set yourself on the path to Hyper-V greatness. Includes basic steps for setting up a VM, configuring settings, basic networking tips and next steps.

Read the post here: The ABC of Hyper-V – 7 Steps to Get Started

[VIDEO] Hyper-V Masterclass – Debunking Virtual Domain Controller Myths

Should Hyper-V be in the domain? Can Hyper-V host its own domain controller? Eric Siron confronts some potentially crippling myths about Hyper-V and domain controllers in this video and also boots up an instance to put these mistruths to rest.

Read the post here: [VIDEO] Hyper-V Masterclass – Debunking Virtual Domain Controller Myths

Looking at the Hyper-V Event Log (January 2018 edition)

Hyper-V has changed over the last few years and so has our event log structure. With that in mind, here is an update of Ben’s original post in 2009 (“Looking at the Hyper-V Event Log”).

This post gives a short overview on the different Windows event log channels that Hyper-V uses. It can be used as a reference to better understand which event channels might be relevant for different purposes.

As a general guidance you should start with the Hyper-V-VMMS and Hyper-V-Worker event channels when analyzing a failure. For migration-related events it makes sense to look at the event logs both on the source and destination node.

Windows Event Viewer showing the Hyper-V-VMMS Admin log

Below are the current event log channels for Hyper-V. Using “Event Viewer” you can find them under “Applications and Services Logs”, “Microsoft”, “Windows”.
If you would like to collect events from these channels and consolidate them into a single file, we’ve published a HyperVLogs PowerShell module to help.
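
If you just want to poke at these channels interactively, the built-in Get-WinEvent cmdlet works as well; a minimal sketch (this is plain PowerShell, not the HyperVLogs module, and exact channel names can vary by Windows version, so list them first):

  # List every Hyper-V event channel registered on this host
  Get-WinEvent -ListLog 'Microsoft-Windows-Hyper-V*' | Sort-Object LogName | Format-Table LogName, RecordCount

  # Pull the newest 50 entries from the VMMS admin channel, a good first stop for failures
  Get-WinEvent -LogName 'Microsoft-Windows-Hyper-V-VMMS-Admin' -MaxEvents 50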

Event channel categories and what they cover:

  • Hyper-V-Compute: Events from the Host Compute Service (HCS) are collected here. The HCS is a low-level management API.
  • Hyper-V-Config: Anything that relates to virtual machine configuration files. If you have a missing or corrupt virtual machine configuration file, there will be entries here that tell you all about it.
  • Hyper-V-Guest-Drivers: Look at this section if you are experiencing issues with VM integration components.
  • Hyper-V-High-Availability: Hyper-V clustering-related events are collected in this section.
  • Hyper-V-Hypervisor: Hypervisor-specific events. You will usually only need to look here if the hypervisor fails to start; in that case you can get detailed information here.
  • Hyper-V-StorageVSP: Events from the Storage Virtualization Service Provider. Typically you would look at these when you want to debug low-level storage operations for a virtual machine.
  • Hyper-V-VID: Events from the Virtualization Infrastructure Driver. Look here if you experience issues with memory assignment, e.g. dynamic memory, or changing static memory while the VM is running.
  • Hyper-V-VMMS: Events from the virtual machine management service. When VMs are not starting properly, or VM migrations fail, this is a good place to start investigating.
  • Hyper-V-VmSwitch: These channels contain events from the virtual network switches.
  • Hyper-V-Worker: Events from the worker process that runs the actual virtual machine. You will see events related to startup and shutdown of the VM here.
  • Hyper-V-Shared-VHDX: Events specific to virtual hard disks that can be shared between several virtual machines. If you are using shared VHDs, this event channel can provide more detail in case of a failure.
  • Hyper-V-VMSP: The VM security process (VMSP) is used to provide secured virtual devices, like the virtual TPM module, to the VM.
  • Hyper-V-VfpExt: Events from the Virtual Filtering Platform (VFP), which is part of the Software Defined Networking stack.
  • VHDMP: Events from operations on virtual hard disk files (e.g. creation, merging) go here.

Please note: some of these only contain analytic/debug logs that need to be enabled separately and not all channels exist on Windows client. To enable the analytic/debug logs, you can use the HyperVLogs PowerShell module.
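
If you prefer not to use the module, the built-in wevtutil tool can also turn analytic/debug channels on and off; a rough sketch (the channel name below is only an illustration, so list the real ones on your machine first):

  # List the Hyper-V channels present on this machine
  wevtutil el | findstr Hyper-V

  # Enable an analytic channel, reproduce the issue, then disable it again
  wevtutil sl Microsoft-Windows-Hyper-V-VMMS-Analytic /e:true
  wevtutil sl Microsoft-Windows-Hyper-V-VMMS-Analytic /e:false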

All the best,

Lars

The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject, but sometimes, that’s just too much. I’m going to try a different approach. Rather than a thorough deep-dive on the topic that tries to cover all of the concepts and how-to, I’m just going to show you what you’re trying to accomplish. Then, I can just link you to the necessary supporting information so that you can make it into reality.

Getting Started

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Your Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network


Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking.

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. You can use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for the virtual machines to use. Either way, this design is perfectly viable whether I like it or not.


Networking for a Single Hyper-V Host, the New Way

With teaming, you can join all of the physical adapters together into a single team and let that team host a single virtual switch. Let the management operating system and all of the guests connect through it.
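
On Hyper-V 2016, one way to build that layout is Switch Embedded Teaming; a minimal sketch, where the switch and adapter names are placeholders:

  # Create a single virtual switch backed by an embedded team of both physical adapters,
  # and share it with the management operating system
  New-VMSwitch -Name 'vSwitch' -NetAdapterName 'pNIC1', 'pNIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $true

On older versions you would create an LBFO team first and point New-VMSwitch at the team NIC instead.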


Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:


VLANs

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLAN IDs 1-4094.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.
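
For that last point, a minimal sketch using Set-VMNetworkAdapterVlan (the VM name, adapter name, and VLAN IDs are placeholders):

  # Put a VM's virtual adapter into access mode on VLAN 42
  Set-VMNetworkAdapterVlan -VMName 'app01' -Access -VlanId 42

  # Tag the management operating system's own virtual adapter, if your physical network expects it
  Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'vSwitch' -Access -VlanId 10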

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • You do not manage the Hyper-V virtual switch via an IP address or a management VLAN. You manage it with tools running in the management operating system or on a remote system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): Preferably, network connections for storage will use dedicated, unteamed physical adapters. If you can’t do that, you can create dedicated virtual NICs in the management operating system.
  • Multiple virtual switches: Almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, resist the habit of creating virtual switches just for VLANs.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
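
For the VMQ pointer, a minimal sketch (the adapter and team names are placeholders):

  # Disable VMQ on the gigabit physical adapters that back the virtual switch, and on the team NIC
  Disable-NetAdapterVmq -Name 'pNIC1', 'pNIC2'
  Disable-NetAdapterVmq -Name 'VMTeam'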

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.

Device Naming for Network Adapters in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen. Nor will you see anything different in the output of Get-NetAdapter.

In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
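
For example, a minimal sketch (the -DeviceNaming parameter accepts On or Off):

  # Enable device naming on every virtual adapter attached to the VM named sv16g2
  Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On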

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although sometimes it inexplicably complains about the DVD drive, but eventually it gets where it’s going).

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Virtual Network Adapter” name, then you can set the name at the same time that you enable the feature:
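
One way to do that, as a sketch (the adapter name 'Backend' is only an illustration):

  # Rename the adapter, then enable device naming on it; the new name is what the guest will see
  Rename-VMNetworkAdapter -VMName sv16g2 -NewName 'Backend'
  Set-VMNetworkAdapter -VMName sv16g2 -Name 'Backend' -DeviceNaming On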

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.


Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties.
  2. When the Ethernet # Properties dialog appears, click Configure.
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value field shows the name that Hyper-V has recorded for the adapter.

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.
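
That is:

  # Show every advanced property for every adapter in the guest
  Get-NetAdapterAdvancedProperty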


Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
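
For example (a sketch, filtering on the display name shown in the GUI section above):

  # The long way: retrieve everything, then filter on the property's display name
  Get-NetAdapterAdvancedProperty | Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'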

Slightly shorter way:
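
For example:

  # Let the cmdlet do the filtering instead
  Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'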

One of many not-future-proofed-but-works-today ways:
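
For instance, querying by the registry keyword (which, as the name implies, could change someday):

  # Works today, but leans on the underlying registry keyword rather than the display name
  Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'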

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
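
For example:

  # select pulls the field, but hands back a custom object rather than a plain string
  Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' | Select-Object -Property DisplayValue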


I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
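
For example:

  # The dot selector returns the value itself as a System.String
  (Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue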

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
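
For example (the variable name is arbitrary):

  # Pin the query to the adapter named Ethernet and store the result as a plain string
  $AdapterDeviceName = (Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Hyper-V Network Adapter Name').DisplayValue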

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).
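
If you’d rather not click through Regedit, roughly the same lookup can be done from PowerShell inside the guest; a sketch that simply walks the same class key:

  # List the adapters under the network class key that carry a HyperVNetworkAdapterName value
  Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}' -ErrorAction SilentlyContinue |
      Get-ItemProperty |
      Where-Object { $_.HyperVNetworkAdapterName } |
      Select-Object DriverDesc, HyperVNetworkAdapterName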

How to build a bulletproof Hyper-V failover cluster

A proven way to keep a virtualized workload highly available is to deploy a Hyper-V failover cluster.

Failover clusters ensure Hyper-V VMs continue to run when a problem knocks a host out of commission. But admins need to set up the cluster properly — paying special attention to the network configuration — to make sure the Hyper-V cluster and apps inside the VM will perform at an optimal level in production.

Get to know the Hyper-V cluster traffic types

To optimize Hyper-V failover cluster performance, admins must understand Hyper-V traffic types and configure Hyper-V networking based on the requirements. Hyper-V uses a physical network adapter for network traffic types, such as cluster, live migration, VM communication, storage, Hyper-V Replica and Hyper-V management traffic.

The cluster service monitors the availability of all nodes in the cluster by sending packets, known as cluster heartbeats, over the physical network adapter. A node that doesn’t respond to a health check is removed from the cluster.

Hyper-V’s live migration feature moves a VM to another Hyper-V node in the cluster if a failure occurs. To do this, Hyper-V uses the physical network adapter used by the other Hyper-V traffic types. If Scale-Out File Server or iSCSI Target Server is deployed, Hyper-V will use the same physical network adapter to communicate with the SOFS cluster or iSCSI Target Server. Similarly, Hyper-V Replica and management network traffic types use the same physical network adapter.

While it’s possible to run all Hyper-V network traffic types through a single network adapter, this configuration might not be suitable for production environments. Some networking applications that run inside the VMs want a dedicated network queue to avoid communication delays. Multiple physical network adapters also help to live-migrate VMs as quickly as possible and avoid disruption. Some IT shops prefer a separate physical network adapter dedicated to management traffic.

To isolate Hyper-V network traffic, install the appropriate number of physical network adapters on the Hyper-V host, map each one to a Hyper-V virtual switch and then assign a unique subnet to each virtual network adapter. To complete the setup, use Failover Cluster Manager or Hyper-V Manager to configure the network settings to isolate network traffic.

For example, to isolate live migration traffic, open Failover Cluster Manager, right-click on Networks in the left navigation pane, click on Live Migration Settings and select a network adapter. In a similar fashion, admins can use Failover Cluster Manager to isolate cluster traffic. Go to Networks, right-click on a network and click on Properties. On the Properties page, select Allow cluster network communication on this network, and uncheck Allow clients to connect through this network. This dedicates a network for cluster-specific communications.
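
The same result can be reached with the FailoverClusters PowerShell module; a sketch (the network name is a placeholder, and the Role values follow the usual convention of 0 = none, 1 = cluster only, 3 = cluster and client):

  # List the cluster networks, then restrict one of them to cluster-only traffic
  Get-ClusterNetwork | Format-Table Name, Role, Address
  (Get-ClusterNetwork -Name 'Cluster Network 2').Role = 1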

NIC teaming delivers resiliency

Microsoft introduced network interface card teaming with Windows Server 2012 to let admins group several physical NICs into a single logical NIC to add redundancy.

NIC teaming in Windows Server 2012 provides Hyper-V Port load balancing to distribute VM network traffic based on each VM’s MAC address. Hyper-V Port load balancing uses a round-robin method to distribute VMs across the NIC team; each VM’s outbound traffic is handled by one active network adapter in the team.

NIC teaming supports two modes: switch-dependent and switch-independent. Switch-dependent mode requires the physical switch to participate in the team, for example via LACP. Switch-independent mode works regardless of how the physical switches are configured; the adapters can even connect to different switches, and the switches have no knowledge of or participation in the team. For Hyper-V clusters, it’s recommended to use switch-independent mode with Hyper-V Port load balancing.
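
A sketch of that recommended combination using the built-in LBFO cmdlets (team and adapter names are placeholders):

  # Build a switch-independent team with Hyper-V Port load balancing, then bind a virtual switch to it
  New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'pNIC1', 'pNIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
  New-VMSwitch -Name 'vSwitch' -NetAdapterName 'VMTeam' -AllowManagementOS $false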

VMQ avoids unnecessary congestion

If the Hyper-V host has network adapters with the Virtual Machine Queue (VMQ) feature, admins should enable it.

VMQ establishes a dedicated queue on the physical network adapter to deliver traffic directly to virtual network adapters, rather than routing it through the management OS first, which prevents communication delays.
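
A quick sketch for checking and enabling it (the adapter name is a placeholder):

  # See which adapters support VMQ and whether it is currently on
  Get-NetAdapterVmq | Format-Table Name, Enabled

  # Enable VMQ on a specific adapter
  Enable-NetAdapterVmq -Name '10GbE-1'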

High availability through a Hyper-V failover cluster offers some level of assurance to the business, but admins should learn about the available settings and options to keep applications in VMs in the cluster operating at the expected level.