Category Archives: Hyper-V

What’s new in Hyper-V for Windows 10 Fall Creators Update?

Windows 10 Fall Creators Update has arrived!  While we’ve been blogging about new features as they appear in Windows Insider builds, many of you have asked for a consolidated list of Hyper-V updates and improvements since Creators Update in April.

Summary:

  • Quick Create includes a gallery (and you can add your own images)
  • Hyper-V has a Default Switch for easy networking
  • It’s easy to revert virtual machines to their start state
  • Host battery state is visible in virtual machines
  • Virtual machines are easier to share

Quick Create virtual machine gallery

The virtual machine gallery in Quick Create makes it easy to find virtual machine images in one convenient location.

You can also add your own virtual machine images to the Quick Create gallery.  Building a custom gallery takes some time but, once built, makes creating virtual machines easy and consistent.

This blog post walks through adding custom images to the gallery.

For images that aren’t in the gallery, select “Local Installation Source” to create a virtual machine from an .iso or .vhd located somewhere in your file system.

Keep in mind that while Quick Create and the virtual machine gallery are convenient, they are not a replacement for the New Virtual Machine wizard in Hyper-V Manager.  For more complicated virtual machine configurations, use the wizard.

Default Switch

The switch named “Default Switch” allows virtual machines to share the host’s network connection using NAT (Network Address Translation).  This switch has a few unique attributes:

  1. Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet. It also works when the host is using a VPN or proxy.
  2. It’s available as soon as you enable Hyper-V – you won’t lose internet connectivity while setting it up.
  3. You can’t delete or rename it.
  4. It has the same name and device ID on all Windows 10 Fall Creators Update Hyper-V hosts.
    Name: Default Switch
    ID: c08cb7b8-9b3c-408e-8e30-5e16a3aeb444
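
Because the name and ID never change, scripts can rely on them. As a quick sketch (using the ID listed above):

```powershell
# Retrieve the Default Switch by its well-known ID
Get-VMSwitch -Id 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444'

# Or by its fixed name
Get-VMSwitch -Name 'Default Switch'
```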

Yes, the Default Switch automatically assigns an IP address to the virtual machine, providing both DHCP and DNS.

I’m really excited to have an always-available network connection for virtual machines on Hyper-V.  The Default Switch offers the best networking experience for virtual machines on a laptop.  If you need highly customized networking, however, continue using Virtual Switch Manager.

Revert! (automatic checkpoints)

This is my personal favorite feature from Fall Creators Update.

For a little bit of background, I mostly use virtual machines to build/run demos and to sandbox simple experiments.  At least once a month, I accidentally mess up my virtual machine.  Sometimes I remember to make a checkpoint and I can roll back to a good state.  Most of the time I don’t.  Before automatic checkpoints, I’d have to choose between rebuilding my virtual machine or manually undoing my mistake.

Starting in Fall Creators Update, Hyper-V creates a checkpoint when you start a virtual machine.  Say you’re learning about Linux and accidentally run `rm -rf /*`, or you update your guest and discover a breaking change; now you can simply revert to the state the virtual machine started in.

Automatic checkpoints are enabled by default on Windows 10 and disabled by default on Windows Server.  They are not useful for everyone.  If you have automation in place, or you’re worried about the overhead of making a checkpoint, you can disable automatic checkpoints with PowerShell (see the example below) or in VM settings under “Checkpoints”.
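
For example, a quick sketch (assuming a VM named VMwithAutomation, as above):

```powershell
# Turn automatic checkpoints off for one virtual machine
Set-VM -Name 'VMwithAutomation' -AutomaticCheckpointsEnabled $false

# Or turn them off for every VM on the host
Get-VM | Set-VM -AutomaticCheckpointsEnabled $false
```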

Here’s a link to the original announcement with more information.

Battery pass-through

Virtual machines in Fall Creators Update are aware of the host’s battery state.

This is nice for a few reasons:

  1. You can see how much battery life you have left in a full-screen virtual machine.
  2. The guest operating system knows the battery state and can optimize for low power situations.

Easier virtual machine sharing

Sharing your Hyper-V virtual machines is easier with the new “Share” button. “Share” packages and compresses your virtual machine so you can move it to another Hyper-V host right from Virtual Machine Connection.

Share creates a “.vmcz” file with your virtual hard drive (vhd/vhdx) and any state the virtual machine will need to run.  “Share” will not include checkpoints. If you would like to also export your checkpoints, you can use the “Export” tool, or the “Export-VM” PowerShell cmdlet.
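
If you do want checkpoints included, a minimal Export-VM sketch (the VM name and destination path are placeholders):

```powershell
# Export a VM, including its checkpoints, to a folder
Export-VM -Name 'MyVM' -Path 'D:\Exports'
```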

Once you’ve moved your virtual machine to another computer with Hyper-V, double-click the “.vmcz” file and the virtual machine will import automatically.

----

That’s the list!  As always, please send us feedback via FeedbackHub.

Curious what we’re building next?  Become a Windows Insider – almost everything here has benefited from your early feedback.

Cheers,
Sarah

Create your custom Quick Create VM gallery

Have you ever wondered whether it is possible to add your own custom images to the list of available VMs for Quick Create?

The answer is: Yes, you can!

Since quite a few people have been asking us, this post will give you a quick example to get started and add your own custom image while we’re working on the official documentation. The following two steps will be described in this blog post:

  1. Create a JSON document describing your image
  2. Add this JSON document to the list of galleries to include

Step 1: Create a JSON document describing your image

The first thing you will need is a JSON document which describes the image you want to show up in Quick Create. The following snippet is a sample JSON document which you can adapt to your own needs. We will publish more documentation on this, including a JSON schema to run validation, as soon as it is ready.
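
Until that official schema is available, here is an illustrative sketch of the shape such a document might take; treat the field names and URLs as assumptions to validate against the official documentation:

```json
{
  "images": [
    {
      "name": "Contoso Dev Image",
      "publisher": "Contoso",
      "version": "1.0.0",
      "locale": "en-US",
      "description": [ "A preconfigured Contoso development environment." ],
      "disk": {
        "uri": "https://www.contoso.com/vms/contoso_dev.vhdx",
        "hash": "sha256:<SHA256-of-the-vhdx>"
      },
      "logo": {
        "uri": "https://www.contoso.com/vms/contoso_logo.png",
        "hash": "sha256:<SHA256-of-the-png>"
      }
    }
  ]
}
```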

To calculate the SHA256 hashes for the linked files, you can use different tools. Since it is already available on Windows 10 machines, I like to use a quick PowerShell call: Get-FileHash -Path .\contoso_logo.png -Algorithm SHA256
The values for logo, symbol, and thumbnail are optional, so if there are no images at hand, you can just remove these values from the JSON document.

Step 2: Add this JSON document to the list of galleries to include

To have your custom gallery image show up on a Windows 10 client, you need to set the GalleryLocations registry value under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization.
There are multiple ways to achieve this; you can adapt the following PowerShell snippet to set the value:
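
```powershell
# Sketch: appends a custom gallery document to the GalleryLocations value.
# The gallery URL below is a placeholder for wherever you host your JSON file.
$registryPath = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
$galleryUrl   = 'https://www.contoso.com/vms/gallery.json'

# Read the current multi-string value (if any) and append the custom gallery
$current = (Get-ItemProperty -Path $registryPath -Name GalleryLocations -ErrorAction SilentlyContinue).GalleryLocations
$updated = @($current | Where-Object { $_ }) + $galleryUrl

Set-ItemProperty -Path $registryPath -Name GalleryLocations -Value $updated -Type MultiString
```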

If you don’t want to include the official Windows 10 developer evaluation images, just remove the fwlink from the GalleryLocations value.

Have fun creating your own VM galleries and stay tuned for our official documentation. We’re looking forward to seeing what you create!

Lars

13 Questions Answered on the Future of Hyper-V

Following our hugely popular panel webinar, “3 Emerging Technologies that will Change the Way you use Hyper-V”, we’ve decided to bring together all of the questions asked during both sessions (we hold two separate webinar sessions on the same topic to accommodate our European and American audiences) into one article, with extended answers, to address what’s around the corner for Hyper-V and related technologies.

Let’s get started!

The Questions

Question 1: Do you think IT Security is going to change as more and more workloads move into the cloud?

Answer: Absolutely! As long as we’re working with connected systems, no matter where they are located, we will always have to worry about security. One common misconception, though, is that a workload housed inside of Microsoft Azure is somehow less secure. Public cloud platforms have been painstakingly set up from the ground up with the help of security experts in the industry. You’ll find that if best practices are followed, and rules of least access and just-in-time administration are observed, the public cloud is a highly secure platform.

Question 2: Do you see any movement to establish a global “law” of data security/restrictions that are not threatened by local laws (like the patriot act)?

Answer: Until all countries of the world are on the same page, I just don’t see this happening. The US treats data privacy in a very different way than the EU, unfortunately. The upcoming General Data Protection Regulation (GDPR), taking effect in May of 2018, is a step in the right direction, but it only applies to the EU and data traversing the boundaries of the EU. It will certainly affect US companies and organizations, but nothing similar in nature is in the works in the US.

Question 3: In the SMB Space, where a customer may only have a single MS Essentials server and use Office 365, do you feel that this is still something that should move to the cloud?

Answer: I think the answer to that question depends greatly on the customer and the use case. As Didier, Thomas and I discussed in the webinar, the cloud is a tool, and you have to evaluate for each case, whether it makes sense or not to run that workload in the cloud. If for that particular customer, they could benefit from those services living in the cloud with little downside, then it may be a great fit. Again, it has to make sense, technically, fiscally, and operationally, before you can consider doing so.

Question 4: What exactly is a Container?

Answer: While not the same thing at all, it’s often easiest to see a container as a kind of ultra-stripped-down VM. A container holds an ultra-slim OS image (in the case of Nano Server, 50-60 MB), any supporting code framework, such as .NET, and then whatever application you want to run within the container. Containers are not the same as VMs because Windows containers all share the kernel of the underlying host OS. However, if you require further isolation, you can use Hyper-V containers, which run a container within an optimized VM so you can take advantage of Hyper-V’s isolation capabilities.

Question 5: On-Premises Computing is Considered to be a “cloud” now too, correct?

Answer: That is correct! In my view, the term cloud doesn’t refer to a particular place, but to the new technologies and software-defined methods that are taking over datacenters today. So you can refer to your infrastructure on-prem as “private cloud”, and anything like Azure or AWS as “Public Cloud”. Then on top of that anything that uses both is referred to as “Hybrid Cloud”.

Question 6: What Happens when my client goes to the cloud and they lose their internet service for 2 weeks?

Answer: The cloud, just like any technology solution, has its shortcomings, which can be overcome if planned for properly. If you have a mission-critical service you’d like to host in the cloud, then you’ll want to research ways for the workload to be highly available. That could include a secondary internet connection from a different provider or some way to make that workload accessible from the on-prem location if needed. Regardless of where the workload is, you need to plan for eventualities like this.

Question 7: What Happened to Azure Pack?

Answer: Azure Pack is still around and usable; it will just be replaced by Azure Stack at some point. In the meantime, there are integrations available that allow you to manage both solutions from your Azure Stack management utility.

Question 8: What about the cost of Azure Stack? What’s the entry point?

Answer: This is something of a difficult question. Figures that I’ve heard range from $75k to $250k, depending on the vendor and the load-out. You’ll want to contact your preferred hardware vendor for more information on this question.

Question 9: We’re a hosting company, is it possible to achieve high levels of availability with Azure Stack?

Answer: Just like any technology solution, you can achieve the coveted four nines of availability. The question is: how much money do you want to spend? You could do so with Azure Stack and the correct supporting infrastructure. However, one other thing to keep in mind: your SLA is only as good as your supporting vendors’. For example, if you sell four nines as an SLA, and the internet provider for your datacenter can only provide 99%, then you’ve already broken your SLA.

Question 10: For Smaller businesses running Azure Stack, should software vendors assume these businesses will look to purchase traditionally on-prem software solutions that are compatible with this? My company’s solution does not completely make sense for the public cloud, but this could bridge the gap. 

Answer: I think for most SMBs, Azure Stack will be fiscally out of reach. With Azure Stack you’re really paying for a “cloud platform”, and for most SMBs it will make more sense to take advantage of public Azure if those types of features are needed. That said, to answer your question, there are already vendors doing this. Anything that will deploy on public Azure using ARM will also deploy easily on Azure Stack.

Question 11: In Azure Stack, can I use any backup software and back up the VM to remote NAS storage or to AWS?

Answer: At release, there is no support for third-party backup solutions in Azure Stack. Right now there is a built-in flat-file backup and that is it. I suspect that it will be opened up to third-party vendors at some point, and it will likely be protected in much the same way as public Azure resources.

Question 12: How would a lot of these services be applied to the K-12 education market? There are lots of laws that require data to be stored in the same country, yet providers often host in a different country.

Answer: If you wanted to leverage a provider’s Azure Stack somewhere, you would likely have to find one that actually hosts it in the geographical region you’re required to operate in. Many hosters will provide written proof of where the workload is hosted for these types of situations.

Question 13: In planning to move to public Azure, how many Azure cloud Instances would I need?

Answer: There is no hard-and-fast answer for this. It depends on the number of VMs/applications and whether you run them in Azure as VMs or in Azure’s PaaS fabric. The Azure Pricing Calculator will give you an idea of VM sizes and what services are available.

Did we miss something?

If you have a question on the future of Hyper-V or any of the 3 emerging technologies that were discussed in the webinar, just post in the comments below and we will get straight back to you. Furthermore, if you asked a question during the webinar that you don’t see here, by all means, let us know in the comments section below and we will be sure to answer it here. Any follow-up questions are also very welcome, so feel free to let us know about those as well!

As always – thanks for reading!

How to Hot Add/Remove Virtual Network Adapters in Hyper-V 2016

Last week I showed you how to hot add/remove memory in Hyper-V 2016, and this week I’m covering another super handy new feature that system admins will also love. Hyper-V 2016 brought many fantastic features (Containers!), but it also added some that indicate natural product maturation. On that list, we find “hot add/remove of virtual network adapters”. If that’s not obvious, it means that you can now add or remove virtual network adapters to and from running virtual machines.

Requirements for Hyper-V Hot Add/Remove of Virtual Network Adapters

To make hot add/remove of network adapters work in Hyper-V, you must meet these requirements:

  • Hypervisor must be 2016 version (Windows 10, Windows Server 2016, or Hyper-V Server 2016)
  • Virtual machine must be generation 2
  • To utilize the Device Naming feature, the virtual machine version must be at least 6.2. The virtual machine configuration version does not matter if you do not attempt to use Device Naming. Meaning, you can bring a version 5.0 virtual machine over from 2012 R2 to 2016 and hot add a virtual network adapter. A discussion on Device Naming will appear in a different article.

The guest operating system may need an additional push to realize that a change was made. I did not encounter any issues with the various operating systems that I tested.

How to Use PowerShell to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

I always recommend PowerShell for adding a second or subsequent network adapter to a virtual machine. Otherwise, they’re all called “Network Adapter”, and sorting that out later can be unpleasant.

Adding a Virtual Adapter with PowerShell

Use Add-VMNetworkAdapter to add a network adapter to a running Hyper-V guest. That’s the same command that you’d use for an offline guest, as well. I don’t know why the authors chose the verb “Add” instead of “New”.
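
A minimal sketch (the VM, switch, and adapter names here are placeholders):

```powershell
# Hot add a named adapter to a running VM, with Device Naming enabled
Add-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch' -Name 'Backup' -DeviceNaming On
```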

The above will work on a virtual machine with a configuration version of at least 6.2. If the virtual machine is set to a lower version, you get a rather confusing message that talks about DVD drives:

It does eventually get around to telling you exactly what it doesn’t like. You can avoid this error by not specifying the DeviceNaming parameter. If you’re scripting, you can avoid the parameter by employing splatting or by setting DeviceNaming to Off.

You can use any of the other parameters of Add-VMNetworkAdapter normally.

Removing a Virtual Adapter with PowerShell

To remove the adapter, use Remove-VMNetworkAdapter:
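
```powershell
# Sketch: removes the 'Backup' adapter added above; names are placeholders
Remove-VMNetworkAdapter -VMName 'svtest' -Name 'Backup'
```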

This is where things can get… interesting, especially if you didn’t specify a unique name for the adapter. The Name parameter works like a search filter; it will remove any adapter that perfectly matches that name. So, if all of the virtual machine’s network adapters use the default name “Network Adapter”, and you specify “Network Adapter” for the Name parameter, then all of that VM’s adapters will be removed.

To address that issue, you’ll need to employ some cleverness. A quick ‘n’ dirty option would be to just remove all of the adapters, then add one. By default, that one adapter will pick up an IP from an available DHCP server. Since you can specify a static MAC address with the StaticMacAddress parameter of Add-VMNetworkAdapter, you can control that behavior with reservations.
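
A sketch of that approach (the VM name, switch name, and MAC address are placeholders):

```powershell
# Remove every adapter from the VM, then add a single adapter with a static MAC
Get-VMNetworkAdapter -VMName 'svtest' | Remove-VMNetworkAdapter
Add-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch' -StaticMacAddress '00155D010203'
```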

You could also filter adapters by MAC address:
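
```powershell
# Sketch: the VM name and MAC address are placeholders
Get-VMNetworkAdapter -VMName 'svtest' |
    Where-Object -Property MacAddress -EQ '00155D010203' |
    Remove-VMNetworkAdapter
```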

You could also use arrays to selectively remove items:
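
```powershell
# Sketch: wrap in @() so indexing works even with a single adapter
$adapters = @(Get-VMNetworkAdapter -VMName 'svtest')
# Remove the second adapter in the list
$adapters[1] | Remove-VMNetworkAdapter
```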

You could even use a loop to knock out all adapters after the first:
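
```powershell
# Sketch: removes every adapter except the first one returned
$adapters = @(Get-VMNetworkAdapter -VMName 'svtest')
for ($i = 1; $i -lt $adapters.Count; $i++)
{
    $adapters[$i] | Remove-VMNetworkAdapter
}
```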

In my unscientific testing, virtual machine network adapters are always stored and retrieved in the order in which they were added, so the above script should always remove every adapter except the original. Based on the file format, I would expect that to always hold true. However, no documentation exists that outright supports that; use this sort of cleverness with caution.

I recommend naming your adapters to save a lot of grief in these instances.

How to Use the GUI to Add or Remove a Virtual Network Adapter from a Running Hyper-V Guest

These instructions work for both Hyper-V Manager and Failover Cluster Manager. Use the virtual machine’s Settings dialog in either tool.

Adding a Virtual Network Adapter in the GUI

Add a virtual network adapter to a running VM the same way that you add one to a stopped VM:

  1. On the VM’s Settings dialog, start on the Add Hardware page. The Network Adapter entry should be black, not gray. If it’s gray, then the VM is either Generation 1 or not in a valid state to add an adapter.
  2. Highlight Network Adapter and click Add.
  3. You will be taken to a screen where you can fill out all of the normal information for a network adapter. Set all items as desired.
  4. Once you’ve set everything to your liking, click OK to add the adapter and close the dialog or Apply to add the adapter and leave the dialog open.

Removing a Virtual Network Adapter in the GUI

As with adding an adapter, removing an adapter from a running virtual machine works the same way as from a stopped one:

  1. Start on the Settings dialog for the virtual machine. Switch to the tab for the adapter that you wish to remove.
  2. Click the Remove button.
  3. The tab for the adapter to be removed will have all of its text crossed out. The dialog items for it will turn gray.
  4. Click OK to remove the adapter and close the dialog or Apply to remove the adapter and leave the dialog open. Click Cancel if you change your mind. For OK or Apply, a prompt will appear warning that you’ll probably disrupt network communications.

Hot Add/Remove of Hyper-V Virtual Adapters for Linux Guests

I didn’t invest a great deal of effort into testing, but this feature works for Linux guests with mixed results. A Fedora guest running on my Windows 10 system was perfectly happy with it.

OpenSUSE Leap… not so much.

But then, I added another virtual network adapter to my OpenSUSE system. This time, I remembered to connect it to a virtual switch before adding. It liked that much better.

So, the moral of the story: for Linux guests, always specify a virtual switch when hot adding a virtual network card. Connecting it afterward does not help.

Also notice that OpenSUSE Leap did not ever automatically configure the adapter for DHCP, whereas Fedora did. As I mentioned in the beginning of the article, you might need to give some environments an extra push.

Also, Leap seemed to get upset when I hot removed the adapter and logged an error message.

To save your eyes, the meat of that message says: “unable to send revoke receive buffer to netvsp”. I don’t know if that’s serious or not. The second moral of this story, then: hot removing network adapters might leave some systems in an inconsistent, unhappy state.

My Thoughts on Hyper-V’s Hot Add/Remove of Network Adapters Feature

Previous versions of Hyper-V did not have this feature and I never missed it. I wasn’t even aware that other hypervisors had it until I saw posts from people scrounging for any tiny excuse to dump hate on Microsoft. Sure, I’ve had a few virtual machines with services that benefited from multiple network adapters. However, I knew of that requirement going in, so I just built them appropriately from the beginning. I suppose that’s a side effect of competent administration. Overall, I find this feature to be a hammer desperately seeking a nail.

That said, it misses the one use that I might have: it doesn’t work for Generation 1 VMs. As you know, a Generation 1 Hyper-V virtual machine can only PXE boot from a legacy network adapter. The legacy network adapter has poor performance. I’d like to be able to remove that legacy adapter post-deployment without shutting down the virtual machine. Still, that’s very low on my wish list. I’m guessing that we’ll eventually be using Generation 2 VMs exclusively, so the problem will handle itself.

During my testing, I did not find any problems at all using this feature with Windows guests. As you can see from the Linux section, things didn’t go quite as well there. Either way, I would think twice about using this feature with production systems. Network disruptions do not always behave exactly as you might expect. Multi-homed systems often crank the “strange” factor up somewhere near “haunted”. Multi-home a system and fire up Wireshark; I can almost promise that you’ll see something that you didn’t expect within the first five minutes.

I know that you’re going to use this feature anyway, and that’s fine; that’s why it’s there. I would make one recommendation: before removing an adapter, clear its TCP/IP settings and disconnect it from the virtual switch. That gives the guest operating system a better opportunity to deal with the removal of the adapter on familiar terms.

A great way to collect logs for troubleshooting

Did you ever have to troubleshoot issues within a Hyper-V cluster or standalone environment and find yourself switching between different event logs? Or did you repro something just to find out that not all of the important Windows event channels had been activated?

To make it easier to collect the right set of event logs into a single .evtx file to help with troubleshooting, we have published the HyperVLogs PowerShell module on GitHub.

In this blog post I am sharing with you how to get the module and how to gather event logs using the functions provided.

Step 1: Download and import the PowerShell module

First of all, you need to download the PowerShell module and import it.
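
A minimal sketch, assuming you’ve downloaded or cloned the repository to C:\HyperVLogs (adjust the path and module file name to match the actual repository layout):

```powershell
# Import the module from the downloaded repository
Import-Module -Name 'C:\HyperVLogs\HyperVLogs.psd1'

# List the functions the module exports
Get-Command -Module HyperVLogs
```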

Step 2: Reproduce the issue and capture logs

Now, you can use the functions provided as part of the module to collect logs for different situations.
For example, to investigate an issue on a single node, you can collect events with steps along the following lines (the function names in the sketch below are hypothetical; use Get-Command -Module HyperVLogs to see the real exported names):
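
```powershell
# 1. Enable the relevant Hyper-V event channels (hypothetical function name)
Enable-EventChannels

# 2. Reproduce the issue you are investigating

# 3. Save the collected events into a single .evtx file (hypothetical function name)
Save-EventChannels -Path 'C:\temp\HyperV-repro.evtx'
```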

Using this module and its functions made it a lot easier for me to collect the right event data to help with investigations. Any feedback or suggestions are highly welcome.

Cheers,
Lars

How to Hot Add/Remove Memory in Hyper-V 2016

We all make mistakes. Sometimes big ones, sometimes little ones. What happens when you don’t provision the correct amount of memory for a Hyper-V virtual machine? The process normally includes scheduling downtime, bringing the VM down, changing the memory allocation, bringing it back up, and then performing your post-downtime procedure. A new feature in Hyper-V 2016 substantially eases the pain: with some guest operating systems, you can increase or decrease the amount of allocated memory on the fly.

Requirements for Hyper-V Hot Add/Remove Memory

Most of the requirements for hot add/remove of memory follow common sense expectations:

  • Hypervisor must be 2016 version (Windows 10, Windows Server 2016, or Hyper-V Server 2016)
  • Guest operating system must support Hyper-V’s technique for hot add/remove of memory
  • Virtual machine must not have Dynamic Memory enabled
  • Virtual machine configuration version must be a 2016 level
  • You cannot remove memory below 1 gigabyte (1000MB in the tools, not 1024)
  • You cannot remove memory that the guest won’t let go of
  • You cannot add memory beyond host availability or capacity

Virtual machines can be Generation 1 or Generation 2.

Guest Operating System Support for Hyper-V Hot Add/Remove Memory

Hot add/remove of memory is not a singular technology but an umbrella term for different techniques. I freely admit that I don’t understand them all. What matters is that not every operating system supports every method.

These operating systems will work with Hyper-V 2016’s hot add/remove of memory:

  • Windows 10 or later (I only tested with Enterprise and Pro)
  • Windows Server 2016 or later
  • Some Linux guests… I want to be more specific, but I don’t know exactly how to tell, either

For Linux, check the official documentation on supported guests (that’s the home page, the distribution list links are at the left and bottom). Look for Runtime Memory Resize. I believe that’s the correct entry. However, some of the notes attached to it also reference Dynamic Memory.

Hyper-V Dynamic Memory is not Compatible with Hot Add/Remove of Fixed Memory

Dynamic Memory and hot add/remove of memory do not work together. If Dynamic Memory is enabled for a virtual machine, this feature is disabled.

Dynamic Memory does employ a memory hot-add technique, but it is not the same. Dynamic Memory does not use hot remove at all.

I do not know how this affects NUMA distribution. NUMA is not propagated into a guest with Dynamic Memory enabled — or at least, it wasn’t on 2012 R2. Fixed memory VMs do get NUMA propagation. When you hot add or remove memory from a Windows VM, it behaves differently than with Dynamic Memory. My assumption, then, is that it handles NUMA propagation without any problems. Linux, however, behaves similarly to Dynamic Memory, so I don’t know the story there.

Virtual Machine Configuration Version Support for Hyper-V Hot Add/Remove Memory

Hyper-V 2016 has been updated a few times since release. Sometimes, that included an update to the virtual machine configuration version. If you create a virtual machine with a current patch level of Hyper-V 2016, you’ll get at least a version 8 VM. 2016 initially released with the hot add/remove memory feature, so any virtual machine created by 2016 should support it. My oldest virtual machine is at version 7 and it works with hot add/remove.

You can see the configuration version of a virtual machine in Hyper-V Manager.

You can also use PowerShell.  The Microsoft.HyperV.PowerShell.VirtualMachine object (the output from Get-VM) exposes a Version property. As of 2.0 of the Hyper-V PowerShell module, the default formatting of Get-VM’s output includes a Version column. For any version of Get-VM, you can directly ask for the version (the VM name in this sketch is a placeholder):
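
```powershell
Get-VM -Name 'svtest' | Select-Object -Property Name, Version
```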

If you use Live Migration or import/export to bring a virtual machine over from 2012 R2, it will be at version 5.0. Hot add/remove of memory will not work with that version.

How to Add or Remove Memory from a Running Hyper-V Guest with PowerShell

Probably the best part of this feature is that you don’t need to learn any new tricks. The StartupMemory value continues to represent the memory setting for fixed memory VMs. So, just use Set-VMMemory with the StartupBytes parameter, as in this sketch (the VM name and size are placeholders):
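
```powershell
Set-VMMemory -VMName 'svtest' -StartupBytes 4GB
```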

If you don’t get any output, the command worked.

How to Add or Remove Memory from a Running Hyper-V Guest with Hyper-V Manager

To change the memory allocation of a running virtual machine using Hyper-V Manager, access the Memory tab of its Settings dialog. Then, just set the value of RAM as desired.

Choosing Between Dynamic Memory and Hyper-V Hot Add/Remove Memory

Personally, I feel like this feature primarily exists to check more boxes on the feature comparison lists against other hypervisors. I’ve run into a few single-issue voters and some “Microsoft is always wrong” types that decided that the absence of this feature was a show-stopper for Hyper-V. Most of the world just didn’t care. Provision your VMs wisely and you’ll never use this feature.

From that, I’d say to just decide whether or not you want to use Dynamic Memory. My default recommendation has always been to use Dynamic Memory, and that has not changed. Reasons:

  • Dynamic Memory works for far more guest operating systems, including most Linux distributions
  • You can reduce the minimum and raise the maximum for a running virtual machine using Dynamic Memory (you need to turn it off to change the startup, raise the minimum, or lower the maximum)
  • Dynamic Memory helps reduce wasteful allocation

Of course, Dynamic Memory isn’t always the best solution. In those cases, choose fixed. Then, if you ever need it, the hot add/remove memory feature will be there (pursuant to all of those other requirements, of course).

Troubleshooting Hyper-V Hot Add/Remove Memory

Hyper-V Manager often shows a non-descript “Failed to modify device ‘Memory’” error.

It does have some clear messages, and those work well. For instance, if you try to assign more memory than the host has, and the guest operating system supports hot add/remove, you’ll be told the exact maximum amount that you can use. If the guest operating system doesn’t support the feature, then you will get the same non-descript error. Sometimes.

Sometimes, it tells you outright that the guest operating system doesn’t support it.

PowerShell often has more helpful messaging. For instance, let’s say that you just created a virtual machine and, as it’s booting, you realize that you didn’t give it enough memory. So, you change it right away. Unfortunately, the guest hasn’t loaded all the way and can’t accept the change. Hyper-V Manager will just give you the useless message shown above. PowerShell, on the other hand, outright says: “Please make sure the guest operating system has fully booted, and try again.” That can come in handy when you have a virtual machine that is applying patches but locks its login screen so that you can’t tell what’s going on.

Note: If you’re automating, you might like to know when a guest can be considered “fully booted”. The answer: no one really knows. Hopefully, changing the memory of a running virtual machine is not a thing that people ever feel the need to automate. I suppose if it’s something that you need to do, you can just loop the cmdlet until it succeeds or throws a different error.
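
If you do need it, a rough sketch of such a loop (the VM name and size are placeholders; a production version should also inspect the error rather than retry forever):

```powershell
# Retry until the guest is booted far enough to accept the change
do
{
    try
    {
        Set-VMMemory -VMName 'svtest' -StartupBytes 4GB -ErrorAction Stop
        $succeeded = $true
    }
    catch
    {
        $succeeded = $false
        Start-Sleep -Seconds 10
    }
} while (-not $succeeded)
```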

PowerShell will also helpfully remind you that you can’t use this feature alongside Dynamic Memory. Hyper-V Manager doesn’t need to do that, since it locks the field.

So, what do you do if you get a non-descript error? Well, you power cycle the guest and try again. Did you say, “But doesn’t that defeat the purpose!?” Why, yes. Yes, it does. However, I was thoroughly convinced that none of my Linux distributions worked with this feature. I was tinkering with other items and testing minimums and maximums, so I shut down all of my Linux guests. To get some screenshots for this article, I tried again with those Linux guests. Every single one of them worked after that!

What You See in the Guest

I took a Windows Server 2016 VM with a startup setting of 1GB and powered it on. I increased its startup memory to 4GB, then dropped it back down to 1GB, and watched Task Manager in the guest.

At first glance, it appears that Task Manager sees a 4GB maximum. Look closer, though. Only the chart has a 4.0GB number. Everywhere else correctly shows the 1.0GB maximum. With Dynamic Memory ballooned from 4GB down to 1GB, most of the items would still show a 4GB maximum. See my Dynamic Memory article for some screenshots and explanations.

On Linux, both top and free continue to show the highest amount of memory that I allocated to the system. They did not reduce their maximums to match a reduction in memory. Instead, they showed a higher amount of used memory. That’s also how Dynamic Memory works, so I’m guessing that Microsoft uses the balloon driver for this feature in Linux guests.

95 Best Practices for Optimizing Hyper-V Performance

We can never get enough performance. Everything needs to be faster, faster, faster! You can find any number of articles about improving Hyper-V performance and best practices. Unfortunately, a lot of that information contains errors, FUD, and misconceptions, and some of it is just plain dated. Technology has changed, and experience is continually teaching us new insights. From that, we can build a list of best practices that will help you to tune your system to provide maximum performance.

Philosophies Used in this Article

This article focuses primarily on performance. It may deviate from other advice that I’ve given in other contexts. A system designed with performance in mind will be built differently from a system with different goals. For instance, a system that tries to provide high capacity at a low price point would have a slower performance profile than some alternatives.

  • Subject matter scoped to the 2012 R2 and 2016 product versions.
  • I want to stay on target by listing the best practices with fairly minimal exposition. I’ll expand ideas where I feel the need; you can always ask questions in the comments section.
  • I am not trying to duplicate pure physical performance in a virtualized environment. That’s a wasted effort.
  • I have already written an article on best practices for balanced systems. It’s a bit older, but I don’t see anything in it that requires immediate attention. It was written for the administrator who wants reasonable performance but also wants to stay under budget.
  • This content targets datacenter builds. Client Hyper-V will follow the same general concepts with variable applicability.

General Host Architecture

If you’re lucky enough to be starting in the research phase — meaning, you don’t already have an environment — then you have the most opportunity to build things properly. Making good purchase decisions pays more dividends than patching up something that you’ve already got.

  1. Do not go in blind.
    • Microsoft Assessment and Planning Toolkit will help you size your environment: MAP Toolkit
    • Ask your software vendors for their guidelines for virtualization on Hyper-V.
    • Ask people that use the same product(s) if they have virtualized on Hyper-V.
  2. Stick with logo-compliant hardware. Check the official list: https://www.windowsservercatalog.com/
  3. Most people will run out of memory first, disk second, CPU third, and network last. Purchase accordingly.
  4. Prefer newer CPUs, but think hard before going with bleeding edge. You may need to improve performance by scaling out. Live Migration requires physical CPUs to be the same or you’ll need to enable CPU compatibility mode. If your environment starts with recent CPUs, then you’ll have the longest amount of time to be able to extend it. However, CPUs commonly undergo at least one revision, and that might be enough to require compatibility mode. Attaining maximum performance may reduce virtual machine mobility.
  5. Set a target density level, e.g. “25 virtual machines per host”. While it may be obvious that higher densities result in lower performance, finding the cut-off line for “acceptable” will be difficult. However, having a target VM number in mind before you start can make the challenge less nebulous.
  6. Read the rest of this article before you do anything.

Management Operating System

Before we carry on, I just wanted to make sure to mention that Hyper-V is a type 1 hypervisor, meaning that it runs right on the hardware. You can’t “touch” Hyper-V because it has no direct interface. Instead, you install a management operating system and use that to work with Hyper-V. You have three choices:

  • Windows Server with the full GUI
  • Windows Server Core
  • Hyper-V Server

Note: Nano Server initially offered Hyper-V, but that functionality will be removed (or has already been removed, depending on when you read this). Most people ignore the fine print of using Nano Server, so I never recommended it anyway.

TL;DR: In the absence of a blocking condition, choose Hyper-V Server. A solid blocking condition would be the Automatic Virtual Machine Activation feature of Datacenter Edition. In such cases, the next preferable choice is Windows Server in Core mode.

I organized those in order by distribution size. Volumes have been written about the “attack surface” and patching. Most of that material makes me roll my eyes. No matter what you think of all that, none of it has any meaningful impact on performance. For performance, concern yourself with the differences in CPU and memory footprint. The widest CPU/memory gap lies between Windows Server and Windows Server Core. When logged off, the Windows Server GUI does not consume many resources, but it does consume some. The space between Windows Server Core and Hyper-V Server is much tighter, especially when the same features/roles are enabled.

One difference between Core and Hyper-V Server is the licensing mechanism. On Datacenter Edition, that does include the benefit of Automatic Virtual Machine Activation (AVMA). That only applies to the technological wiring. Do not confuse it with the oft-repeated myth that installing Windows Server grants guest licensing privileges. The legal portion of licensing stands apart; read our eBook for starting information.

Because you do not need to pay for the license for Hyper-V Server, it grants one capability that Windows Server does not: you can upgrade at any time. That allows you to completely decouple the life cycle of your hosts from your guests. Such detachment is a hallmark of the modern cloud era.

If you will be running only open source operating systems, Hyper-V Server is the natural choice. You don’t need to pay any licensing fees to Microsoft at all with that usage. I don’t realistically expect any pure Linux shops to introduce a Microsoft environment, but Linux-on-Hyper-V is a fantastic solution in a mixed-platform environment. And with that, let’s get back onto the list.

Management Operating System Best Practices for Performance

  1. Prefer Hyper-V Server first, Windows Server Core second
  2. Do not install any software, feature, or role in the management operating system that does not directly aid the virtual machines or the management operating system. Hyper-V prioritizes applications in the management operating system over virtual machines. That’s because it trusts you; if you are running something in the management OS, it assumes that you really need it.
  3. Do not log on to the management operating system. Install the management tools on your workstation and manipulate Hyper-V remotely.
  4. If you must log on to the management operating system, log off as soon as you’re done.
  5. Do not browse the Internet from the management operating system. Don’t browse from any server, really.
  6. Stay current on mainstream patches.
  7. Stay reasonably current on driver versions. I know that many of my peers like to install drivers almost immediately upon release, but I can’t join that camp. While it’s not entirely unheard of for a driver update to bring performance improvements, it’s not common. With all of the acquisitions and corporate consolidations going on in the hardware space — especially networking — I feel that the competitive drive to produce quality hardware and drivers has entered a period of decline. In simple terms, view new drivers as a potential risk to stability, performance, and security.
  8. Join your hosts to the domain. Systems consume less of your time if they answer to a central authority.
  9. Use antivirus and intrusion prevention. As long you choose your anti-malware vendor well and the proper exclusions are in place, performance will not be negatively impacted. Compare that to the performance of a compromised system.
  10. Read through our article on host performance tuning.

Leverage Containers

In the “traditional” virtualization model, we stand up multiple virtual machines running individual operating system environments. As “virtual machine sprawl” sets in, we wind up with a great deal of duplication. In the past, we could justify that as a separation of the environment. Furthermore, some Windows Server patches caused problems for some software but not others. In the modern era, containers and omnibus patch packages have upset that equation.

Instead of building virtual machine after virtual machine, you can build a few virtual machines. Deploy containers within them. Strategies for this approach exceed the parameters of this article, but you’re aiming to reduce the number of disparate complete operating system environments deployed. With careful planning, you can reduce density while maintaining a high degree of separation for your services. Fewer kernels are loaded, fewer context switches occur, less memory contains the same code bits, fewer disk seeks to retrieve essentially the same information from different locations.

  1. Prefer containers over virtual machines where possible.

CPU

You can’t do a great deal to tune CPU performance in Hyper-V. Overall, I count that among my list of “good things”; Microsoft did the hard work for you.

  1. Follow our article on host tuning; pay special attention to C States and the performance power settings.
  2. For Intel chips, leave hyperthreading on unless you have a defined reason to turn it off.
  3. Leave NUMA enabled in hardware. On your VMs’ property sheet, you’ll find a Use Hardware Topology button. Remember to use that any time that you adjust the number of vCPUs assigned to a virtual machine or move it to a host that has a different memory layout (physical core count and/or different memory distribution).
  4. Decide whether or not to allow guests to span NUMA nodes (the global host NUMA Spanning setting). If you size your VMs to stay within a NUMA node and you are careful to not assign more guests than can fit solidly within each NUMA node, then you can increase individual VM performance. However, if the host has trouble locking VMs into nodes, then you can negatively impact overall memory performance. If you’re not sure, just leave NUMA at defaults and tinker later.
  5. For modern guests, I recommend that you use at least two virtual CPUs per virtual machine. Use more in accordance with the virtual machine’s performance profile or vendor specifications. This is my own personal recommendation; I can visibly detect the response difference between a single vCPU guest and a dual vCPU guest.
  6. For legacy Windows guests (Windows XP/Windows Server 2003 and earlier), use 1 vCPU. More will likely hurt performance more than help.
  7. Do not grant more than 2 vCPU to a virtual machine without just cause. Hyper-V will do a better job reducing context switches and managing memory access if it doesn’t need to try to do too much core juggling. I’d make exceptions for very low-density hosts where 2 vCPU per guest might leave unused cores. At the other side, if you’re assigning 24 cores to every VM just because you can, then you will hurt performance.
  8. If you are preventing VMs from spanning NUMA nodes, do not assign more vCPU to a VM than you have matching physical cores in a NUMA node (usually means the number of cores per physical socket, but check with your hardware manufacturer).
  9. Use Hyper-V’s priority, weight, and reservation settings with great care. CPU bottlenecks are highly uncommon; look elsewhere first. A poor reservation will cause more problems than it solves.

Memory

I’ve long believed that every person that wants to be a systems administrator should be forced to become conversant in x86 assembly language, or at least C. I can usually spot people that have no familiarity with programming in such low-level languages because they almost invariably carry a bizarre mental picture of how computer memory works. Fortunately, modern memory is very, very, very fast. Even better, the programmers of modern operating system memory managers have gotten very good at their craft. Trying to tune memory as a systems administrator rarely pays dividends. However, we can establish some best practices for memory in Hyper-V.

  1. Follow our article on host tuning. Most importantly, if you have multiple CPUs, install your memory such that it uses multi-channel and provides an even amount of memory to each NUMA node.
  2. Be mindful of operating system driver quality. Windows drivers differ from applications in that they can permanently remove memory from the available pool. If they do not properly manage that memory, then you’re headed for some serious problems.
  3. Do not make your CSV cache too large.
  4. For virtual machines that will perform high quantities of memory operations, avoid dynamic memory. Dynamic memory disables NUMA (out of necessity). How do you know what constitutes a “high volume”? Without performance monitoring, you don’t.
  5. Set your fixed memory VMs to a higher priority and a shorter startup delay than your Dynamic Memory VMs. This ensures that they will start first, allowing Hyper-V to plot an optimal NUMA layout and reduce memory fragmentation. It doesn’t help a lot in a cluster, unfortunately. However, even in the best case, this technique won’t yield many benefits.
  6. Do not use more memory for a virtual machine than you can prove that it needs. Especially try to avoid using more memory than will fit in a single NUMA node.
  7. Use Dynamic Memory for virtual machines that do not require the absolute fastest memory performance.
  8. For Dynamic Memory virtual machines, pay the most attention to the startup value. It sets the tone for how the virtual machine will be treated during runtime. For virtual machines running full GUI Windows Server, I tend to use a startup of either 1 GB or 2 GB, depending on the version and what else is installed.
  9. For Dynamic Memory VMs, set the minimum to the operating system vendor’s stated minimum (512 MB for Windows Server). If the VM hosts a critical application, add to the minimum to ensure that it doesn’t get choked out.
  10. For Dynamic Memory VMs, set the maximum to a reasonable amount. You’ll generally discover that amount through trial and error and performance monitoring. Do not set it to an arbitrarily high number. Remember that, even on 2012 R2, you can raise the maximum at any time.

Check the CPU section for NUMA guidance.

Networking

In the time that I’ve been helping people with Hyper-V, I don’t believe that I’ve seen anyone waste more time worrying about anything that’s less of an issue than networking. People will read whitepapers and forums and blog articles and novels and work all weekend to draw up intricately designed networking layouts that need eight pages of documentation. But, they won’t spend fifteen minutes setting up a network utilization monitor. I occasionally catch grief for using MRTG since it’s old and there are shinier, bigger, bolder tools, but MRTG is easy and quick to set up. You should know how much traffic your network pushes. That knowledge can guide you better than any abstract knowledge or feature list.

That said, we do have many best practices for networking performance in Hyper-V.

  1. Follow our article on host tuning. Especially pay attention to VMQ on gigabit and separation of storage traffic.
  2. If you need your network to go faster, use faster adapters and switches. A big team of gigabit won’t keep up with a single 10 gigabit port.
  3. Use a single virtual switch per host. Multiple virtual switches add processing overhead. Usually, you can get a single switch to do whatever you wanted multiple switches to do.
  4. Prefer a single large team over multiple small teams. This practice can also help you to avoid needless virtual switches.
  5. For gigabit, anything over 4 physical ports probably won’t yield meaningful returns. I would use 6 at the outside. If you’re using iSCSI or SMB, then two more physical adapters just for that would be acceptable.
  6. For 10GbE, anything over 2 physical ports probably won’t yield meaningful returns.
  7. If you have 2 10GbE and a bunch of gigabit ports in the same host, just ignore the gigabit. Maybe use it for iSCSI or SMB, if it’s adequate for your storage platform.
  8. Make certain that you understand how the Hyper-V virtual switch functions. Most important:
    • You cannot “see” the virtual switch in the management OS except with Hyper-V specific tools. It has no IP address and no presence in the Network and Sharing Center applet.
    • Anything that appears in Network and Sharing Center that you think belongs to the virtual switch is actually a virtual network adapter.
    • Layer 3 (IP) information in the host has no bearing on guests — unless you create an IP collision
  9. Do not create a virtual network adapter in the management operating system for the virtual machines. I did that before I understood the Hyper-V virtual switch, and I have encountered lots of other people that have done it. The virtual machines will use the virtual switch directly.
  10. Do not multi-home the host unless you know exactly what you are doing. Valid reasons to multi-home:
    • iSCSI/SMB adapters
    • Separate adapters for cluster roles. e.g. “Management”, “Live Migration”, and “Cluster Communications”
  11. If you multi-home the host, give only one adapter a default gateway. If other adapters must use gateways, use the old route command or the new New-NetRoute command (see the sketch after this list).
  12. Do not try to use internal or private virtual switches for performance. The external virtual switch is equally fast. Internal and private switches are for isolation only.
  13. If all of your hardware supports it, enable jumbo frames. Ensure that you perform validation testing (e.g. ping storage-ip -f -l 8000).
  14. Pay attention to IP addressing. If traffic needs to locate an external router to reach another virtual adapter on the same host, then traffic will traverse the physical network.
  15. Use networking QoS if you have identified a problem.
    • Use datacenter bridging, if your hardware supports it.
    • Prefer the Weight QoS mode for the Hyper-V switch, especially when teaming.
    • To minimize the negative side effects of QoS, rely on limiting the maximums of misbehaving or non-critical VMs over trying to guarantee minimums for vital VMs.
  16. If you have SR-IOV-capable physical NICs, SR-IOV provides the best performance. However, you can’t use the traditional Windows team for the physical NICs. Also, you can’t use VMQ and SR-IOV at the same time.
  17. Switch-embedded teaming (2016) allows you to use SR-IOV. Standard teaming does not.
  18. If using VMQ, configure the processor sets correctly.
  19. When teaming, prefer Switch Independent mode with the Dynamic load balancing algorithm. I have done some performance testing on the types (near the end of the linked article). However, a reader commented on another article that the Dynamic/Switch Independent combination can cause some problems for third-party load balancers (see comments section).
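
Regarding multi-homed hosts (item 11 above), here is a sketch of adding a static route with New-NetRoute instead of giving a second adapter a default gateway; the subnet, interface alias, and next hop are placeholders:

```powershell
# Reach a remote storage subnet through the iSCSI adapter without a default gateway
New-NetRoute -DestinationPrefix '192.168.50.0/24' -InterfaceAlias 'iSCSI-1' -NextHop '192.168.40.1'
```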

Storage

When you need to make real differences in Hyper-V’s performance, focus on storage. Storage is slow. The best way to make storage not be slow is to spend money. But, we have other ways.

  1. Follow our article on host tuning. Especially pay attention to:
    • Do not break up internal drive bays between Hyper-V and the guests. Use one big array.
    • Do not tune the Hyper-V partition for speed. After it boots, Hyper-V averages zero IOPS for itself. As a prime example, don’t put Hyper-V on SSD and the VMs on spinning disks. Do the opposite.
    • The best way to get more storage speed is to use faster disks and bigger arrays. Almost everything else will only yield tiny differences.
  2. For VHD (not VHDX), use fixed disks for maximum performance. Dynamically-expanding VHD is marginally, but measurably, slower.
  3. For VHDX, use dynamically-expanding disks for everything except high-utilization databases. I receive many arguments on this, but I’ve done the performance tests and have years of real-world experience. You can trust that (and run the tests yourself), or you can trust theoretical whitepapers from people that make their living by overselling disk space but have perpetually misplaced their copy of diskspd.
  4. Avoid using shared VHDX (2012 R2) or VHDS (2016). Performance still isn’t there. Give this technology another maturation cycle or two and look at it again.
  5. Where possible, do not use multiple data partitions in a single VHD/X.
  6. When using Cluster Shared Volumes, try to use at least as many CSVs as you have nodes. Starting with 2012 R2, CSV ownership will be distributed evenly, theoretically improving overall access.
  7. You can theoretically improve storage performance by dividing virtual machines across separate storage locations. If you need to make your arrays span fewer disks in order to divide your VMs’ storage, you will have a net loss in performance. If you are creating multiple LUNs or partitions across the same disks to divide up VMs, you will have a net loss in performance.
  8. For RDS virtual machine-based VDI, use hardware-based or Windows’ Hyper-V-mode deduplication on the storage system. The read hits, especially with caching, yield positive performance benefits.
  9. The jury is still out on using host-level deduplication for Windows Server guests, but it is supported with 2016. I personally will be trying to place Server OS disks on SMB storage deduplicated in Hyper-V mode.
  10. The slowest component in a storage system is the disk(s); don’t spend a lot of time worrying about controllers beyond enabling caching.
  11. RAID-0 is the fastest RAID type, but provides no redundancy.
  12. RAID-10 is generally the fastest RAID type that provides redundancy.
  13. For Storage Spaces, three-way mirror is fastest (by a lot).
  14. For remote storage, prefer MPIO or SMB multichannel over multiple unteamed adapters. Avoid placing this traffic on teamed adapters.
  15. I’ve read some scattered notes that say that you should format with 64 kilobyte allocation units. I have never done this, mostly because I don’t think about it until it’s too late. If the default size hurts anything, I can’t tell. Someday, I’ll remember to try it and will update this article after I’ve gotten some performance traces. If you’ll be hosting a lot of SQL VMs and will be formatting their VHDX with 64kb AUs, then you might get more benefit.
  16. I still don’t think that ReFS is quite mature enough to replace NTFS for Hyper-V. For performance, I definitely stick with NTFS.
  17. Don’t do full defragmentation. It doesn’t help. The minimal defragmentation that Windows automatically performs is all that you need. If you have some crummy application that makes this statement false, then stop using that application or exile it to its own physical server. Defragmentation’s primary purpose is to wear down your hard drives so that you have to buy more hard drives sooner than necessary, which is why employees of hardware vendors recommend it all the time. If you have a personal neurosis that causes you pain when a disk becomes “too” fragmented, use Storage Live Migration to clear and then re-populate partitions/LUNs. It’s wasted time that you’ll never get back, but at least it’s faster. Note: All retorts must include verifiable and reproducible performance traces, or I’m just going to delete them.
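As rough illustrations of items 3 and 15, here is how each looks in PowerShell (the path, size, and drive letter are placeholders, not recommendations):

```powershell
# Item 3: create a dynamically-expanding VHDX ('D:\VMs\...' and 60GB are placeholders)
New-VHD -Path 'D:\VMs\Virtual Hard Disks\app01.vhdx' -SizeBytes 60GB -Dynamic

# Item 15: format a volume with 64 kilobyte allocation units ('V' is a placeholder)
Format-Volume -DriveLetter V -FileSystem NTFS -AllocationUnitSize 65536
```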

Clustering

For real performance, don’t cluster virtual machines. Use fast internal or direct-attached SSDs. Cluster for redundancy, not performance. Use application-level redundancy techniques instead of relying on Hyper-V clustering.

In the modern cloud era, though, most software doesn’t have its own redundancy and host clustering is nearly a requirement. Follow these best practices:

  1. Validate your cluster. You may not need to fix every single warning, but be aware of them.
  2. Follow our article on host tuning. Especially pay attention to the bits on caching storage. It includes a link to enable CSV caching.
  3. Remember your initial density target. Add as many nodes as necessary to maintain that along with sufficient extra nodes for failure protection.
  4. Use the same hardware in each node. You can mix hardware, but CPU compatibility mode and mismatched NUMA nodes will have at least some impact on performance.
  5. For Hyper-V, every cluster node should use a minimum of two separate IP endpoints. Each IP must exist in a separate subnet. This practice allows the cluster to establish multiple simultaneous network streams for internode traffic.
    • One of the addresses must be designated as a “management” IP, meaning that it must have a valid default gateway and register in DNS. Inbound connections (such as your own RDP and PowerShell Remoting) will use that IP.
    • None of the non-management IPs should have a default gateway or register in DNS.
    • One alternative IP endpoint should be preferred for Live Migration. Cascade Live Migration preference order through the others, ending with the management IP. You can configure this setting most easily in Failover Cluster Manager by right-clicking on the Networks node.
    • Further IP endpoints can be used to provide additional pathways for cluster communications. Cluster communications include the heartbeat, cluster status and update messages, and Cluster Shared Volume information and Redirected Access traffic.
    • You can set any adapter to be excluded from cluster communications but included in Live Migration in order to enforce segregation. Doing so generally does not improve performance, but may be desirable in some cases.
    • You can use physical or virtual network adapters to host cluster IPs.
    • The IP for each cluster adapter must exist in a unique subnet on that host.
    • Each cluster node must contain an IP address in the same subnet as the IPs on other nodes. If a node does not contain an IP in a subnet that exists on other nodes, then that network will be considered “partitioned” and the node(s) without a member IP will be excluded from that network.
    • If the host will connect to storage via iSCSI, segregate iSCSI traffic onto its own IP(s). Exclude it/them from cluster communications and Live Migration (a sketch for assigning network roles follows this list). Because they don’t participate in cluster communications, it is not absolutely necessary to place them in separate subnets. However, doing so will provide some protection from network storms.
  6. If you do not have RDMA-capable physical adapters, Compression usually provides the best Live Migration performance.
  7. If you do have RDMA-capable physical adapters, SMB usually provides the best Live Migration performance.
  8. I don’t recommend spending time tinkering with the metric to shape CSV traffic anymore. It utilizes SMB, so the built-in SMB multi-channel technology can sort things out.
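As a sketch of items 5 through 7 in PowerShell (the network names are placeholders; check yours with Get-ClusterNetwork):

```powershell
Import-Module FailoverClusters

# Items 6/7: choose the Live Migration performance option
# (Compression without RDMA-capable adapters, SMB with them)
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Item 5: assign cluster network roles
# Role 3 = cluster and client (management), 1 = cluster communications only,
# 0 = excluded from cluster communications (e.g. iSCSI)
(Get-ClusterNetwork -Name 'Management').Role = 3
(Get-ClusterNetwork -Name 'Cluster1').Role = 1
(Get-ClusterNetwork -Name 'iSCSI1').Role = 0
```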

Virtual Machines

The preceding guidance obliquely covers several virtual machine configuration points (check the CPU and the memory sections). We have a few more:

  1. Don’t use Shielded VMs or BitLocker. The encryption and VMWP hardening incur overhead that will hurt performance. The hit is minimal — but this article is about performance.
  2. If you have 1) VMs with very high inbound networking needs, 2) physical NICs >= 10GbE, 3) VMQ enabled, and 4) spare CPU cycles, then enable RSS within the guest operating systems (see the sketch after this list). Do not enable RSS in the guest OS unless all of the preceding are true.
  3. Do not use the legacy network adapter in Generation 1 VMs any more than absolutely necessary.
  4. Utilize checkpoints rarely and briefly. Know the difference between standard and “production” checkpoints.
  5. Use time synchronization appropriately: virtual domain controllers should not have the Hyper-V time synchronization service enabled, but (generally speaking) all other VMs should (see the sketch after this list). The hosts should pull their time from the domain hierarchy. If possible, the primary domain controller should pull from a secured time source.
  6. Keep Hyper-V guest services up-to-date. Supported Linux systems can be updated via kernel upgrades/updates from their distribution repositories. Windows 8.1+ and Windows Server 2012 R2+ will update from Windows Update.
  7. Don’t do full defragmentation in the guests, either. Seriously. We’re administering multi-spindle server equipment here, not displaying a progress bar to someone with a 5400-RPM laptop drive so that they feel like they’re accomplishing something.
  8. If the virtual machine’s primary purpose is to run an application that has its own replication technology, don’t use Hyper-V Replica. Examples: Active Directory and Microsoft SQL Server. Such applications will replicate themselves far more efficiently than Hyper-V Replica.
  9. If you’re using Hyper-V Replica, consider moving the VMs’ page files to their own virtual disk and excluding it from the replica job. If you have a small page file that doesn’t churn much, that might cost you more time and effort than you’ll recoup.
  10. If you’re using Hyper-V Replica, enable compression if you have spare CPU but leave it disabled if you have spare network. If you’re not sure, use compression.
  11. If you are shipping your Hyper-V Replica traffic across an encrypted VPN or keeping its traffic within secure networks, use Kerberos. SSL en/decryption requires CPU. Also, the asymmetrical nature of SSL encryption causes the encrypted data to be much larger than its decrypted source.
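Rough sketches of items 2 and 5 (the VM and adapter names are placeholders):

```powershell
# Item 5, run on the host: turn off time synchronization for a virtual domain controller
# ('DC01' is a placeholder VM name)
Get-VM -Name 'DC01' | Disable-VMIntegrationService -Name 'Time Synchronization'

# Item 2, run inside a guest that meets ALL of the prerequisites above
# ('Ethernet' is a placeholder adapter name)
Enable-NetAdapterRss -Name 'Ethernet'
```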

Monitoring

You must monitor your systems. Monitoring is not, and has never been, an optional activity.

  1. Be aware of Hyper-V-specific counters (the sketch after this list shows a couple of them). Many people try to use Task Manager in the management operating system to gauge guest CPU usage, but it just doesn’t work. The management operating system is a special-case virtual machine, which means that it is using virtual CPUs. Its Task Manager cannot see what the guests are doing.
  2. Performance Monitor has the most power of any built-in tool, but it’s tough to use. Look at something like the Performance Analysis of Logs (PAL) tool, which understands Hyper-V.
  3. In addition to performance monitoring, employ state monitoring. With that, you no longer have to worry (as much) about surprise events like disk space or memory filling up. I like Nagios, as regular readers already know, but you can select from many packages.
  4. Take periodic performance baselines and compare them to earlier baselines.
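As a starting point for the first item, here are two of the Hyper-V-specific counters (run these on the host; exact counter availability can vary by OS version):

```powershell
# Total hypervisor CPU load, all guests included; host Task Manager cannot show this
Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'

# CPU load per virtual processor, per guest
Get-Counter -Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'
```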

If you’re able to address a fair proportion of the points on this list, I’m sure you’ll see a boost in Hyper-V performance. Don’t forget, this list is not exhaustive; I’ll be adding to it periodically to keep it as comprehensive as possible. However, if you think there’s something missing, let me know in the comments below and you may see the number 95 increase!

Container Images are now out for Windows Server version 1709!

With the release of Windows Server version 1709 also come Windows Server Core and Nano Server base OS container images.

It is important to note that while older versions of the base OS container images will work on a newer host (with Hyper-V isolation), the opposite is not true. Container images based on Windows Server version 1709 will not work on a host using Windows Server 2016.  Read more about the different versions of Windows Server.

We’ve also made some changes to our tagging scheme so you can more easily specify which version of the container images you want to use.  From now on, the “latest” tag will follow the releases of the current LTSC product, Windows Server 2016. If you want to keep up with the latest patches for Windows Server 2016, you can use:

“microsoft/nanoserver”
or
“microsoft/windowsservercore”

in your dockerfiles to get the most up-to-date version of the Windows Server 2016 base OS images. You can also continue using specific versions of the Windows Server 2016 base OS container images by using the tags specifying the build, like so:

“microsoft/nanoserver:10.0.14393.1770”
or
“microsoft/windowsservercore:10.0.14393.1770”.

If you would like to use base OS container images based on Windows Server version 1709, you will have to specify that with the tag. In order to get the most up-to-date base OS container images of Windows Server version 1709, you can use the tags:

“microsoft/nanoserver:1709”
or
“microsoft/windowsservercore:1709”

And if you would like a specific version of these base OS container images, you can specify the KB number that you need on the tag, like this:

“microsoft/nanoserver:1709_KB4043961”
or
“microsoft/windowsservercore:1709_KB4043961”.
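For example, pulling the version 1709 base images from a PowerShell prompt looks like this (assuming Docker is installed and configured for Windows containers); in a dockerfile, the equivalent is a FROM line naming the same image and tag:

```powershell
docker pull microsoft/windowsservercore:1709
docker pull microsoft/nanoserver:1709
```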

We hope that this tagging scheme will ensure that you always choose the image that you want and need for your environment. Please let us know in the comments if you have any feedback for us.

Note: We currently do not intend to use the build numbers to specify Windows Server version 1709 container images. We will only be using the KB schema specified above for the tagging of these images. Let us know if you have feedback about this as well.

Regards,
Ender

How to Perform Hyper-V Storage Migration

New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. However, we use the term “Shared Nothing Live Migration” for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (configuration in .xml or .vmcx, .bin, .vsv, etc.)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t.
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
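A minimal sketch (the VM name and destination folder match the description below):

```powershell
Move-VMStorage -VMName 'testvm' -DestinationStoragePath 'C:\LocalVMs'
```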

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The VHDs hash table is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
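A sketch of that call, with placeholder paths:

```powershell
Move-VMStorage -VMName 'testvm' `
    -VirtualMachinePath 'C:\LocalVMs\testvm' `
    -SnapshotFilePath 'C:\LocalVMs\testvm' `
    -SmartPagingFilePath 'C:\LocalVMs\testvm'
```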

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by semicolons. So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
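Using the names from that sentence, it looks like this:

```powershell
@{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' }
```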

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
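A sketch (the second file name, svtest2.vhdx, is a placeholder for illustration):

```powershell
Move-VMStorage -VMName 'testvm' -Vhds @(
    @{
        SourceFilePath = '\\svstore\VMs\svtest.vhdx';
        DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx'
    },
    @{
        SourceFilePath = '\\svstore\VMs\svtest2.vhdx';
        DestinationFilePath = 'C:\LocalVMs\testvm\svtest2.vhdx'
    }
)
```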

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parentheses and where I used curly braces.

Tip: To move a single VHDX file, you don’t need the entire array notation. You can pass a single hash table, like the first example, directly to Vhds.

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
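Here’s a sketch that fits that description (it assumes every attached disk is a VHD/X file; pass-through disks would need different handling):

```powershell
foreach ($VM in Get-VM)
{
    # Everything except the virtual disks goes to slow storage
    $MoveParameters = @{
        VM = $VM
        VirtualMachinePath = '\\vmstore\slowstorage'
        SnapshotFilePath = '\\vmstore\slowstorage'
        SmartPagingFilePath = '\\vmstore\slowstorage'
    }
    # Build one source/destination hash table per virtual disk
    $VhdMaps = @()
    foreach ($VHD in $VM.HardDrives)
    {
        $VhdMaps += @{
            SourceFilePath = $VHD.Path
            DestinationFilePath = Join-Path -Path '\\vmstore\faststorage' -ChildPath (Split-Path -Path $VHD.Path -Leaf)
        }
    }
    # Only add the Vhds parameter when the VM actually has virtual disks
    if ($VhdMaps.Count -gt 0)
    {
        $MoveParameters.Add('Vhds', $VhdMaps)
    }
    Move-VMStorage @MoveParameters
}
```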

I used splatting for the parameters for two reasons: 1, legibility. 2, to handle VMs without any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
    [screenshot: movevm_hvmwiz1]
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
    [screenshot: movevm_hvmwiz2]
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:

[screenshot: movevm_hvmwiz3]

If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:

[screenshot: movevm_hvmwiz4]

For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screen shot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:

[screenshot: movevm_hvmwiz5]

After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:

[screenshot: movevm_hvmwiz6]

How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:

[screenshot: movecm_fcm1]

If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV under the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:

[screenshot: movevm_fcmaddshare]

The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. View the differences:

[screenshot: movecm_fcm2]

Once you have the dialog the way that you like it, click Start.

3 Emerging Technologies that Will Change the Way You Use Hyper-V

Hello once again everyone!

The I.T. landscape changes incredibly quickly (if you know of a faster-changing industry, I’d love to hear about it!). I.T. professionals need to know what’s coming around the corner to stay ahead of the game or risk being left behind. We don’t want that to happen to you, so we’ve run down what we feel are the three most important emerging technologies that will drastically change the Hyper-V landscape.

  1. Continued Adoption of Public Cloud Platforms – It’s becoming clear that the public cloud is continuing to gain steam. It’s not just one vendor but several, and together they continue to pull workloads from on-premise installations into the cloud. Many people were keen to wait out this “cloud thing”, but it has become quite clear that it’s here to stay. Capabilities in online platforms such as Microsoft Azure and Amazon AWS have increasingly made it easier, more cost-effective, and more desirable to put workloads in the public cloud. These platforms can often provide services that most customers don’t have available on-premise, and this, paired with several other things that we’ll talk about in the webinar, is leading to increased adoption of these platforms over on-premise installations.
  2. Azure Stack and the Complete Abstraction of Hyper-V under-the-hood – With some of the latest news and release information out of Microsoft regarding their new Microsoft Azure Stack (MAS), things have taken an interesting turn for Hyper-V. On-premise administrators have always been used to having direct access to the hypervisor, so they may be surprised to learn that Hyper-V is so far under the hood in MAS that you can’t even access it. That’s right. The hypervisor has become so simplified and automated that there is no need to access it directly in MAS, primarily because MAS follows the same usage and management guidelines as Microsoft Azure. This will bother a lot of administrators, but it’s becoming the world we live in. As such, we’ll be talking about this extensively during the webinar.
  3. Containers and Microservices and why they are a game-changer – Containers have become one of the new buzzwords in the industry. If you’re not aware, you can think of a container as similar to a VM, but fundamentally different. Whereas a VM virtualizes the OS and everything on top of it, a container virtualizes only the application; much of the underlying support functionality is handled by the container host rather than by an OS built into a VM. For a long time it seemed that containers were going to be primarily a developer thing, but as the line between IT Pro and Dev continues to blur, containers can no longer be ignored by IT Pros, and we’ll be talking about that extensively during our panel discussion.

As you can see, there is much to talk about, and many will be wondering how this affects them. You’re probably asking yourself questions like: “What new skills should IT Pros be learning to stay relevant?”, “Are hypervisors becoming irrelevant?”, “Will containers replace virtual machines?”, “Is the cloud here to stay?”, “Is there still a place for Windows Server in the world?”, “What can I do now to stay relevant and what skills do I need to learn to future-proof my career?” Yep, these developments certainly raise a lot of issues, which is why we decided to take this topic further.

Curious to know more? Join our Live Webinar!

As you know we love to put on webinars here at Altaro as we find them a critical tool for getting information about new technologies and features to our viewership. We’ve always stuck to the same basic educational format and it’s worked well over the years. However, we’ve always wanted to try something a bit different. There certainly isn’t anything wrong with an educational format, but with some topics, it’s often best to just have a conversation. This idea is at the core of our next webinar along with some critical changes that are occurring within our industry.

For the first time ever, Altaro will be putting on a panel-style webinar with not 1 or 2, but 3 Microsoft Cloud and Datacenter MVPs. Andy Syrewicze, Didier Van Hoye, and Thomas Maurer will all be hosting this webinar as they talk about some major changes occurring in the industry today and take your questions and feedback. These are things that will affect the way you use and consume Hyper-V.

Webinar Registration