
Hyper-V Quick Tip: Safely Shutdown a Guest with Unresponsive Mouse

Q: A virtual machine running Windows under Hyper-V does not respond to mouse commands in VMConnect. How do I safely shut it down?

A: Use a combination of VMConnect’s key actions and native Windows key sequences to shut down.

Ordinarily, you would use one of Hyper-V’s various “Shut Down” commands to instruct a virtual machine to gracefully shut down the guest operating system. Alternatively, you can use the guest operating system’s native techniques for shutting down. In Windows guests running the full desktop experience, the mouse provides the easiest way. However, if the guest’s integration services fail, the mouse becomes inoperable in VMConnect. The keyboard continues to work, of course.

Shutting Down a Windows Guest of Hyper-V Using the Keyboard

Your basic goal is to reach a place where you can issue the shutdown command.

Tip: Avoid using the mouse on the VMConnect window at all. Clicking inside the guest display area brings up the prompt about the mouse each time unless you disable it. Clicking on VMConnect’s title bar will automatically set focus so that it will send most keypresses into the guest operating system. You cannot send system key combinations or anything involving the physical Windows key (otherwise, these directions would be a lot shorter).

  1. First, you need to log in. Windows 10/Windows Server 2016 and later no longer require any particular key sequence to reach a login prompt — pressing any key while VMConnect has focus should bring one up. Windows 8.1/Windows Server 2012 R2 and earlier all require a CTRL+ALT+DEL sequence before the login prompt becomes available. For those, click Action on VMConnect’s menu bar, then click Ctrl+Alt+Delete. If your VMConnect session is running locally, you can press the CTRL+ALT+END sequence on your physical keyboard instead. However, that won’t work within a remote desktop session.

    You can also press the related button on VMConnect’s button bar immediately below the text menu. It’s the button with three small boxes. In the screenshot above, look directly to the left of the highlighted text.
  2. Log in with valid credentials. Your virtual machine’s network likely does not work either, so you may need to use local credentials.
  3. Use the same sequences from step 1 to send a CTRL+ALT+DEL sequence to the guest.
  4. In the overlay that appears, use the physical down or up arrow key until Task Manager is selected, then press Enter. The screen will look different on versions prior to 10/2016 but will function the same.
  5. Task Manager should appear as the top-most window. If it does, proceed to step 6.
    If it does not, then you might be out of luck. If you can see enough of Task Manager to identify the window that obscures it, or if you’re just plain lucky, you can close the offending program. If you want, you can just proceed to step 6 and try to run these steps blindly.
    1. Press the TAB key. That will cause Task Manager to switch focus to its processes list.
    2. Press the up or down physical arrow keys to cycle through the running processes.
    3. Press Del to close a process.
  6. Press ALT+F to bring up Task Manager’s file menu. Press Enter or N for Run new task (wording is different on earlier versions of Windows).
  7. In the Create new task dialog, type shutdown /s /t 0. If your display does not distinguish, that’s a zero at the end, not the letter O. Shutting down from within the console typically does not require administrative access, but if you’d like, you can press Tab to set focus to the Create this task with administrative privileges box and then press the Spacebar to check it. Press Enter to run the command (or Tab to the OK button and press Enter).

Once you’ve reached step 7, you have other options. You can enter cmd to bring up a command prompt or powershell for a PowerShell prompt. If you want to tinker with different options for the shutdown command, you can do that as well. If you would like to get into Device Manager to see if you can sort out whatever ails the integration services, run devmgmt.msc (use the administrative privileges checkbox for best results).
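
For reference, a few common variants of the command you could type into that same box; these are standard shutdown.exe switches, shown here only as examples:

shutdown /s /t 0     (shut down immediately)
shutdown /r /t 0     (restart immediately)
shutdown /s /t 60    (shut down after a 60-second delay)
shutdown /a          (abort a pending timed shutdown)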

Be aware that this generally will not fix anything. Whatever prevented the integration services from running will likely continue. However, your guest won’t suffer any data loss. So, you could connect its VHDX file(s) to a healthy virtual machine for troubleshooting. Or, if the problem is environmental, you can safely relocate the virtual machine to another host.

More Hyper-V Quick Tips

How Many Cluster Networks Should I Use?

How to Choose a Live Migration Performance Solution

How to Enable Nested Virtualization

Have you run into this issue yourself? Were you able to navigate around it? What was your solution? Let us know in the comments section below!

The Latest Updates from Windows Server 2019 Development

Webinar Announcement

Want to know all about the full version of Windows Server 2019 upon its release? Join our upcoming free webinar What’s New in Windows Server 2019 on October 3rd to learn from the experts about all the new updates as well as a closer look at the standout features.

Windows Server 2019 free webinar from Altaro

The webinar will be presented live twice on the day at 2pm CEST/8am EDT/5am PDT and at 7pm CEST/1pm EDT/10am PDT to cater for our audiences on both sides of the pond. The content will be the same so feel free to join whichever time slot suits you best. Both webinars are live so bring your questions to ask our experts anything you want to know about Windows Server 2019!

Save my Seat for the webinar!

The Current State of Affairs – Windows Server 2019 Updates

Microsoft’s Windows Server teams have been hard at work on preparing 2019 for release. They’ve already given us several new features over the past few months. As we approach the unveiling of the final product, the preview release cadence accelerates as long-term projects begin to wrap up. Where we previously covered one build per article, I now have three separate builds to report on (17733, 17738, and 17744). We’ve got a raft of new features as well as multiple refinements geared toward a polished end product.

The Final Stages

If you haven’t yet started trying out Windows Server 2019 previews but have been thinking about it, I don’t think there will be a better time. There’s always a chance that some new major feature has yet to be announced, but there are more than enough now to keep you busy for a while. If you’re thinking about becoming an expert on the product for employability or sales purposes, if you’re planning to release software for the new platform, or if you intend to adopt Windows Server 2019 early on, this is the time to get into the program. Get your feedback and bug reports into the system now while it’s still relatively easy to incorporate.

Microsoft has been asking for two things all along that have increased in importance:

  • In-place upgrades: Wouldn’t you like to just hop to the next version of Windows Server without going through a painful data migration? If so, try it out on a test system. Microsoft has done a great deal of work trying to make Windows Server 2019 upgrade-friendly. I’ve had some mixed experiences with it so far. They can only make it better if we tell them where it fails.
  • Application compatibility: I would prioritize application compatibility testing, especially for software developers and administrators responsible for irreplaceable line-of-business applications. Windows Server 2019 introduces major changes to the way Windows Server has operated nearly since its introduction. You need to be prepared.

How to Join the Windows Server Insider Program

To get involved, simply sign up on the Windows Server Insiders page. Unlike the Windows [Desktop] Insiders program, you don’t need to use up a licensed server instance. Windows Server Insider builds use their own keys. You can try out features and report any problems or positive experiences to a special forum just for Insiders. Be aware that the builds are time-bombed, so you will only be able to keep one running for a few months. This software is not built for production use.

Remember that even after Windows Server 2019 goes live, the Insider program will continue. You’ll have the opportunity to preview and help shape the future of LTSC and SAC builds beyond 2019. However, I expect that the Windows Server teams will turn their attention toward the next SAC release after 2019 goes gold, meaning that you likely won’t be getting new GUI-enabled builds until they start working on the post-2019 LTSC release.

Official Release Statements

You can read Microsoft’s notices about each of the builds mentioned in this article:

Summary of Features in Windows Server 2019 Builds 17733, 17738, and 17744

New features included in these builds:

  • Support for HTTP/2
  • Support for Cubic (a “congestion control provider” that helps regulate TCP traffic)
  • Software Defined Networking (SDN) right in Windows Server and controlled by Windows Admin Center — no VMM necessary!
  • SDN high-performance gateways
  • Distributed Cluster Name Objects — allows a CNO to simultaneously hold an IP from each node rather than present a single IP across the entire cluster
  • Specialized bus support for Windows Server containers grants containers the ability to directly utilize SPI, I2C, GPIO, and UART/COM
  • Failover Clustering no longer requires NTLM
  • SMB 1.0 is now disabled by default
  • Windows Subsystem for Linux is part of the build
  • Windows Defender Advanced Threat Protection has been rolled in
  • Windows Server 2019 images ship with version 4.7 of the .Net Framework

Summary of Updated Features in Windows Server 2019 Builds 17733, 17738, and 17744

The following features were previously introduced, but received significant updates in these builds:

  • Cluster Sets: A “cluster set” is essentially a cluster of clusters. They address the scalability limits of clusters without making many changes on the cluster level; smaller organizations can use clusters as they always have while larger organizations can employ new benefits. Build 17733 adds new enhancements for virtual machine placement on hyper-converged cluster sets.
  • Windows Admin Center provides greater control over Hyper-V

Refinements in Windows Server 2019 Builds 17733, 17738, and 17744

Not everything is a feature; sometimes things just need to work better or differently. Microsoft did a few things during the preview cycles that would be inappropriate for a release product. These builds include some of those corrections.

  • The Hyper-V Server SKU no longer needs a product key. This does not mean that you can or should use a preview release of Hyper-V Server 2019 indefinitely.
  • You will now be asked to change your password at initial post-install sign in.
  • Changes to branding during the installation process.

Additional Reading on Windows Server 2019 Updates

We have a lot going on now with Windows Server 2019, but it’s just the leading edge of a long march of new features and improvements. A few links to help you get caught up:

Spend some time discovering and reading up on the new features. There is something in there for just about everyone.

Thoughts on Windows Server 2019 Builds 17733, 17738, and 17744

Overall, my greatest feeling on these builds is excitement — we’re seeing the clear signs that we’re closing in on the final release. I do have a few thoughts on some of the specific features.

Standard Networking Enhancements

I’ve followed a number of these enhancements closely. The HTTP/2 and LEDBAT demonstrations are impressive to watch. I have not yet seen any presentations on Cubic, but it certainly holds a great deal of promise. Even in my private home systems, I’ve long wanted a way to shape the way that various networking activities utilize my available networking bandwidth.

Hyper-V Networking Enhancements

Modifying the software-defined networking feature so that it can be controlled without VMM or a third-party tool is a huge step. Cloud and hosting providers have great use for SDN, as do large organizations that strain the limits of VLANs. However, SDN provides more than scalability. It also allows for a high degree of isolation. We’ve been able to use private Hyper-V virtual switches for isolation, but those become difficult to use for multiple VMs, especially in clusters. Now, anyone can use SDN.

Specialized Bus Support

Server virtualization solves multiple problems, but we still have a few barriers to virtual-only deployments. Hardware peripherals remain right at the top of those problems. The new bus functionality included in Windows Server containers may present a solution. It won’t be full virtualization, of course. It will, however, grant the ability to run a hardware-dependent container on a general-purpose host.

I should point out that this feature is designed around IoT challenges. We may or may not be able to fit it into existing hardware challenges.

Security Enhancements

If you look through the multitude of feature notes, you’ll find multiple points of hardening in Windows Server 2019. I particularly welcomed two new changes in these recent builds:

  • SMB 1.0 disabled by default. Newer features of SMB eclipse version 1.0 in every imaginable way. First and foremost, the security implications of using SMB 1.0 can no longer be ignored. Through 2016, Windows and Windows Server made SMB 1.0 passively available because Windows XP, Windows Server 2003, and some applications require it. Now that Windows XP and Windows Server 2003 have been out of support for several years, Microsoft no longer has any native reason to continue supporting SMB 1.0 by default. If you still have software vendors hanging on to the ’90s, they’ll have to go through extra steps to continue their archaic ways.
  • End of NTLM requirement for Failover Clustering. NTLM is relatively easy to break with modern technologies. Realistically, a cluster’s inter-node communications should be isolated from general network access anyway. However, that does not diminish our need to secure as much as possible. It’s good to see NTLM removed from cluster communications.

Windows Admin Center Enhancements for Hyper-V

I spent some time going through the 1808 release of Windows Admin Center and noted several changes. The current state of WAC for Hyper-V deserves its own article. However, it would appear that Microsoft has been working on the user experience of this tool. Furthermore, WAC has additional control points over Hyper-V. It’s still not my interface of choice for Hyper-V, but it continues to improve.

Next Steps

I’ll continue bringing news of builds as they release, of course. I would recommend becoming directly acquainted with the advancements in Windows Server 2019 as soon as you can. At this stage of Windows Server’s maturation, new features are complicated enough that they’ll take time to learn. The sooner you get started, the less catch-up you’ll have to look forward to later. Of course, during our October webinar on Windows Server 2019 we will have all the details on the final version of all the features in all these builds – and more! Make sure to save your seat now and don’t miss out on that event!

If you are looking through these enhancements and testing things for yourself, are there any features here that you are most excited about? Anything you feel is over-the-top amazing? Anything you feel is lackluster? Let us know in the comments section below!

Thanks for reading!

This post is part of a series on Windows Server 2019 builds leading up to the release in late 2018. Read more about the release here:

These 3 New Features in Windows Server 2019 Could be Game Changers

Windows Server 2019 Preview: What’s New and What’s Cool

Sneak peek: Windows Server 2019 Insider Preview Build 17666

What’s New in Windows Server 2019 Insider Preview Build 17692

Curious About Windows Server 2019? Here’s the Latest Features Added

These 3 New Features in Windows Server 2019 Could be Game Changers

A new Windows Server Insider Build has been posted and this one contains three exciting new features that could have a huge impact on future Windows Server users. This continues the march toward Windows Server 2019 with a new set of features intended to debut in that forthcoming release.

You can read the official notification for build 17723 here. If you’d like to get into the Windows Server Insider program and test these builds yourself, you can do that here. In the immediately previous build, I recommended that you install directly on hardware to test out the new Hyper-V features. This one would not benefit as much, but it’s good to keep up on in-place upgrades if you can.

Ongoing Testing Request

As Microsoft and I remind you with each new build, they are interested in gathering as much feedback as possible on two fronts:

  • In-place upgrade from WS2012R2 and/or WS2016
  • Compatibility with applications

If you find anything, use the Windows Server Insiders forums to report in.

Build 17723 Feature 1: Expansion of the System Insights Feature

System Insights was introduced with build 17692. This new feature provides a framework for Windows Server to gather data and analyze it for predictive purposes. We can use it for performance trending more easily than using Performance Monitor and related tools.

Build 17723 opens System Insights up to gathering data from any performance counter. It includes access to new features via PowerShell and Windows Admin Center.

Build 17723 Feature 2: Expanded Kubernetes Support

Containers are simple to set up and tinker with, but directly managing them at even a small scale quickly becomes tedious. Managing containers in a public cloud can be even more difficult. Kubernetes is one option (known as a “container orchestrator”) for handling containers. Kubernetes can have a presence on a Windows Server installation for managing your on-premises containers. This build improves on Windows Server support for Kubernetes.

I do not use Kubernetes often, so I didn’t push this far. The official blog post announcing the new build includes links to guide you through configuring Kubernetes.

Build 17723 Feature 3: Low Extra Delay Background Transfer (LEDBAT)

LEDBAT is one of those features that we’ve all wanted for quite some time. With LEDBAT, you can place server-side traffic of your choosing into a box, so to speak, so that it must always take a backseat to any other traffic. It will use only whatever bandwidth is left over when nothing else needs to use the network.
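
As a rough sketch of what configuring it might look like (the cmdlets are the existing NetTCPSetting and NetTransportFilter ones; whether the LEDBAT congestion provider value is available depends on your build), you point a custom TCP setting at LEDBAT and then attach a transport filter for the traffic you want deprioritized:

# assumes the LEDBAT congestion provider value is available on this build
Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider LEDBAT
# example: deprioritize server traffic on local port 445
New-NetTransportFilter -SettingName InternetCustom -LocalPortStart 445 -LocalPortEnd 445 -RemotePortStart 0 -RemotePortEnd 65535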

While this would be a fun feature to test out and demo, Microsoft already has a fantastic article on the topic. It outlines the problem, gives some examples, explains why other approaches do not work, and demonstrates the feature in action.

Quick Introduction to System Insights

Let’s take a quick look at System Insights. Some of this comes from the official documentation page for System Insights.

Enabling the System Insights Data Gathering Feature

To begin, you need to enable the feature. You can do that with PowerShell:
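
The feature name below comes from the System Insights documentation; confirm it on your build with Get-WindowsFeature before installing:

Get-WindowsFeature *Insights*    # confirm the exact feature name on your build
Install-WindowsFeature -Name System-Insights -IncludeManagementTools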

You can also enable it in Server Manager’s Add Roles and Features Wizard:

When you check that box, it will prompt you to enable the management tools:

Adding the feature or its management tools does not require a reboot.

The management tools are not truly necessary on servers if you will be managing them remotely.

Enabling Windows Admin Center to Poll System Insights

Note: Everything that you see here was performed using the 17723 preview of WAC, which is available at the same location as the Windows Server Insiders download.

To use System Insights graphically, you can employ Windows Admin Center. If your WAC system is in gateway mode, it can pull data from remote systems. You must have the System Insights management tools installed on your WAC system using the above directions. Next, you must enable the plug-in within WAC.

First, access WAC’s settings by clicking the gear icon at the top right of its window:

At the left of the window, under the Gateway menu, choose Extensions:

Ensure that you are on the Available Extensions tab. Find and highlight System Insights. Click the Install button:

You will be prompted to confirm the installation. The plugin will then appear on the Installed Extensions tab.

Viewing System Insights in Windows Admin Center

Once you’ve done the above, Windows Admin Center will display a System Insights link for connected systems. Unfortunately, System Insights is a Windows Server 2019-only feature; the link will display for older operating systems but will not function:

Note: I apologize for the poor scaling of the Windows Admin Center screenshots. I’m getting them as small and focused as I can while maintaining legibility. An ongoing UX problem with Windows Admin Center is that it has been optimized to run full-screen on 4k displays and is horrifically disrespectful of screen real estate on anything less.

Built-In System Insights WAC Demonstration

Let’s start with a look at the insights that ship natively:

As for the error, I couldn’t find anything anywhere with more details. This is a preview, so maybe they’ll address that. We’ll see.

If you click on one of the items, you’ll be taken to a screen that shows a history and forecast chart for the specific item. On my 1680×1050 monitor, I could not do anything to get all of the items into a single screen. So, I’ve broken it down into three parts so that the individual components will scale better.

System Insights Overview Section

At the top of the page is just an overview that repeats what you saw in the table.

System Insights Forecast Section

Next, you have the aforementioned graphs. The chart takes up almost all of the screen and cannot be shortened vertically. Shrinking your window might cause it to become taller, though. Also, you cannot adjust the forecast term.

Poor UX design notwithstanding, these charts might come in handy. But, we still don’t know what sort of accuracy to expect. Looking at it with my own analytical mindset, I don’t know why it forecasts such a radical change in CPU usage. But, this system has not been collecting data for very long. We’ll see how it does with more information. I would also like to discover if it extends its forecast after collecting more data. A prediction spanning less than 10 days is not terribly useful.

System Insights History Section

Below the chart, you’ll find a listing of data gathering events. This will almost certainly require you to scroll if you want to see it. Fortunately, you probably won’t care about it often. I’m not sure why it’s not on a different tab or something.

System Insights PowerShell Module

The System Insights PowerShell module exposes several cmdlets:

You can use Get-InsightsCapability to list a host’s installed Insights capabilities and Get-InsightsCapabilityResult to get a text readout that mirrors what you saw in WAC:
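
For example, assuming the default capabilities are installed; the capability name used here is one of the built-in ones:

Get-InsightsCapability
Get-InsightsCapabilityResult -Name "CPU capacity forecasting"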

The new part here should be of interest to systems software developers: you can build your own System Insights plugins and install them with Add-InsightsCapability. You can find details and guides on the official docs page.

Commentary on Build 17723

This release presents some exciting new features for Windows Server 2019 that I look forward to implementing.

Depending on how easy Microsoft makes it to build System Insights plugins, we could find ourselves with a wealth of performance forecasting modules in very short order. Enterprising systems admins might be able to architect their own. Even better, the WAC and PowerShell interfaces are better for viewing that data than almost any other available tool. I still think the user experience in WAC needs a great deal of attention, but that concern is secondary to the capabilities.

Expanded support for Kubernetes shows Microsoft’s ongoing commitment not only to container technologies but to work with outside entities to improve their viability on Windows Server. I would have liked to see more information in the article detailing just what was improved.

I find the new LEDBAT technology to be quite intriguing. We’ll be able to use it to ensure that our critical server applications never become choked out without setting ham-handed restrictions on everything else. I feel that once the community gets hold of this, we’ll see many new applications that enhance our networking experiences.

If you read my commentary on build 17709, you’ll know that I tried out the new in-place upgrade and ran into some frustrating problems. This time, I upgraded from 17709 directly to 17723 without any issues at all. I didn’t need to change a single configuration item. It did tell me that I had to shut down running VMs, but it did that during the pre-flight when I had yet to commit any serious time to the attempt. I don’t know if Microsoft intentionally improved something in the upgrade cycle or if my luck just changed, but I won’t complain.

This post is part of a series on Windows Server 2019 builds leading up to the release in late 2018. Read more about the release here:

Windows Server 2019 Preview: What’s New and What’s Cool

Sneak peek: Windows Server 2019 Insider Preview Build 17666

What’s New in Windows Server 2019 Insider Preview Build 17692

Curious About Windows Server 2019? Here’s the Latest Features Added

Hyper-V Quick Tip: How to Enable Nested Virtualization

Q: How Do I Enable Nested Virtualization for Hyper-V Virtual Machines?

A: Pass $true for Set-VMProcessor’s “ExposeVirtualizationExtensions” parameter

In its absolute simplest form:
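
Along these lines, with demovm standing in for your virtual machine’s name:

Set-VMProcessor demovm -ExposeVirtualizationExtensions $true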

Set-VMProcessor has several other parameters which you can view in its online help.

As shown above, the first parameter is positional, meaning that it guesses that I supplied a virtual machine’s name because it’s in the first slot and I didn’t tell it otherwise. For interactive work, that’s fine. In scripting, try to always fully qualify every parameter so that you and other maintainers don’t need to guess:
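
Something like this, again with demovm as a placeholder name:

Set-VMProcessor -VMName 'demovm' -ExposeVirtualizationExtensions $true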

The Set-VMProcessor cmdlet also accepts pipeline input. Therefore, you can do things like:
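
For instance, with placeholder VM names:

Get-VM -Name demovm | Set-VMProcessor -ExposeVirtualizationExtensions $true
Get-VM -Name nested1, nested2 | Set-VMProcessor -ExposeVirtualizationExtensions $true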

Requirements for Nested Virtualization

In order for nested virtualization to work, you must meet all of the following:

  • The Hyper-V host must be running at least Windows 10 Anniversary Update, Windows Server 2016, Hyper-V Server 2016, or Windows Server Semi-Annual Channel
  • The Hyper-V host must be using Intel CPUs. AMD is not yet supported
  • A virtual machine must be off to have its processor extensions changed

No configuration changes are necessary for the host.

Microsoft only guarantees that you can run Hyper-V nested within Hyper-V. Other hypervisors may work, but you will not receive support either way. You may have mixed results trying to run different versions of Hyper-V. I am unaware of any support statement on this, but I’ve had problems running mismatched levels of major versions.

Memory Changes for Nested Virtual Machines

Be aware that a virtual machine with virtualization extensions exposed will always use its configured value for Startup memory. You cannot use Dynamic Memory, nor can you change the virtual machine’s fixed memory while it is running.
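
If a VM you want to nest currently uses Dynamic Memory, switch it to static memory before exposing the extensions; a minimal sketch, with the VM name and memory size as placeholders:

# disable Dynamic Memory and set a fixed startup amount (values are examples)
Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $false -StartupBytes 4GB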

Remember, as always, I’m here to help, so send me any questions you have on this topic using the question form below and I’ll get back to you as soon as I can.

More Hyper-V Quick Tips from Eric:

Hyper-V Quick Tip: How to Choose a Live Migration Performance Solution

Hyper-V Quick Tip: How Many Cluster Networks Should I Use?

Hyper-V HyperClear Mitigation for L1 Terminal Fault

Introduction

A new speculative execution side channel vulnerability was announced recently that affects a range of Intel Core and Intel Xeon processors. This vulnerability, referred to as L1 Terminal Fault (L1TF) and assigned CVE-2018-3646 for hypervisors, can be used for a range of attacks across isolation boundaries, including intra-OS attacks from user-mode to kernel-mode as well as inter-VM attacks. Due to the nature of this vulnerability, creating a robust, inter-VM mitigation that doesn’t significantly degrade performance is particularly challenging.

For Hyper-V, we have developed a comprehensive mitigation to this attack that we call HyperClear. This mitigation is in-use by Microsoft Azure and is available in Windows Server 2016 and later. The HyperClear mitigation continues to allow for safe use of SMT (hyper-threading) with VMs and, based on our observations of deploying this mitigation in Microsoft Azure, HyperClear has shown to have relatively negligible performance impact.

We have already shared the details of HyperClear with industry partners. Since we have received questions as to how we are able to mitigate the L1TF vulnerability without compromising performance, we wanted to broadly share a technical overview of the HyperClear mitigation and how it mitigates L1TF speculative execution side channel attacks across VMs.

Overview of L1TF Impact to VM Isolation

As documented here, the fundamental premise of the L1TF vulnerability is that it allows a virtual machine running on a processor core to observe any data in the L1 data cache on that core.

Normally, the Hyper-V hypervisor isolates what data a virtual machine can access by leveraging the memory address translation capabilities provided by the processor. In the case of Intel processors, the Extended Page Tables (EPT) feature of Intel VT-x is used to restrict the system physical memory addresses that a virtual machine can access.

Under normal execution, the hypervisor leverages the EPT feature to restrict what physical memory can be accessed by a VM’s virtual processor while it is running. This also restricts what data the virtual processor can access in the cache, as the physical processor enforces that a virtual processor can only access data in the cache corresponding to system physical addresses made accessible via the virtual processor’s EPT configuration.

By successfully exploiting the L1TF vulnerability, the EPT configuration for a virtual processor can be bypassed during the speculative execution associated with this vulnerability. This means that a virtual processor in a VM can speculatively access anything in the L1 data cache, regardless of the memory protections configured by the processor’s EPT configuration.

Intel’s Hyper-Threading (HT) technology is a form of Simultaneous MultiThreading (SMT). With SMT, a core has multiple SMT threads (also known as logical processors), and these logical processors (LPs) can execute simultaneously on a core. SMT further complicates this vulnerability, as the L1 data cache is shared between sibling SMT threads of the same core. Thus, a virtual processor for a VM running on a SMT thread can speculatively access anything brought into the L1 data cache by its sibling SMT threads. This can make it inherently unsafe to run multiple isolation contexts on the same core. For example, if one logical processor of a SMT core is running a virtual processor from VM A and another logical processor of the core is running a virtual processor from VM B, sensitive data from VM B could be seen by VM A (and vice-versa).

Similarly, if one logical processor of a SMT core is running a virtual processor for a VM and the other logical processor of the SMT core is running in the hypervisor context, the guest VM could speculatively access sensitive data brought into the cache by the hypervisor.

Basic Inter-VM Mitigation

To mitigate the L1TF vulnerability in the context of inter-VM isolation, the most straightforward mitigation involves two key components:

  1. Flush L1 Data Cache On Guest VM Entry – Every time the hypervisor switches a processor thread (logical processor) to execute in the context of a guest virtual processor, the hypervisor can first flush the L1 data cache. This ensures that no sensitive data from the hypervisor or previously running guest virtual processors remains in the cache. To enable the hypervisor to flush the L1 data cache, Intel has released updated microcode that provides an architectural facility for flushing the L1 data cache.
  2. Disable SMT – Even with flushing the L1 data cache on guest VM entry, there is still the risk that a sibling SMT thread can bring sensitive data into the cache from a different security context. To mitigate this, SMT can be disabled, which ensures that only one thread ever executes on a processor core.

For Hyper-V versions prior to Windows Server 2016, the L1TF mitigation is based on these components. However, this basic mitigation has the major downside that SMT must be disabled, which can significantly reduce the overall performance of a system. Furthermore, this mitigation can result in a very high rate of L1 data cache flushes since the hypervisor may switch a thread between the guest and hypervisor contexts many thousands of times a second. These frequent cache flushes can also degrade the performance of the system.

HyperClear Inter-VM Mitigation

To address the downsides of the basic L1TF Inter-VM mitigation, we developed the HyperClear mitigation. The HyperClear mitigation relies on three key components to ensure strong Inter-VM isolation:

  1. Core Scheduler
  2. Virtual-Processor Address Space Isolation
  3. Sensitive Data Scrubbing

Core Scheduler

The traditional Hyper-V scheduler operates at the level of individual SMT threads (logical processors). When making scheduling decisions, the Hyper-V scheduler would schedule a virtual processor onto a SMT thread without regard to what the sibling SMT threads of the same core were doing. Thus, a single physical core could be running virtual processors from different VMs simultaneously.

Starting in Windows Server 2016, Hyper-V introduced a new scheduler implementation for SMT systems known as the “Core Scheduler”. When the Core Scheduler is enabled, Hyper-V schedules virtual cores onto physical cores. Thus, when a virtual core for a VM is scheduled, it gets exclusive use of a physical core, and a VM will never share a physical core with another VM.

With the Core Scheduler, a VM can safely take advantage of SMT (Hyper-Threading). When a VM is using SMT, the hypervisor scheduling allows the VM to use all the SMT threads of a core at the same time.
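
For anyone following along on Windows Server 2016, where the classic scheduler is still the default, the core scheduler can be switched on with bcdedit followed by a host reboot; Windows Server 2019 is expected to use it by default. A quick sketch:

bcdedit /set hypervisorschedulertype core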

Thus, the Core Scheduler provides the essential protection that a VM’s data won’t be directly disclosed across sibling SMT threads. It protects against cross-thread data exposure of a VM since two different VMs never run simultaneously on different threads of the same core.

However, the Core Scheduler alone is not sufficient to protect against all forms of sensitive data leakage across SMT threads. There is still the risk that hypervisor data could be leaked across sibling SMT threads.

Virtual-Processor Address Space Isolation

SMT Threads on a core can independently enter and exit the hypervisor context based on their activity. For example, events like interrupts can cause a SMT thread to switch out of running the guest virtual processor context and begin executing the hypervisor context. This can happen independently for each SMT thread, so one SMT thread may be executing in the hypervisor context while its sibling SMT thread is still running a VM’s guest virtual processor context. An attacker running code in the less trusted guest VM virtual processor context on one SMT thread can then use the L1TF side channel vulnerability to potentially observe sensitive data from the hypervisor context running on the sibling SMT thread.

One potential mitigation to this problem is to coordinate hypervisor entry and exit across SMT threads of the same core. While this is effective in mitigating the information disclosure risk, this can significantly degrade performance.

Instead of coordinating hypervisor entry and exits across SMT threads, Hyper-V employs strong data isolation in the hypervisor to protect against a malicious guest VM leveraging the L1TF vulnerability to observe sensitive hypervisor data. The Hyper-V hypervisor achieves this isolation by maintaining separate virtual address spaces in the hypervisor for each guest SMT thread (virtual processor). When the hypervisor context is entered on a specific SMT thread, the only data that is addressable by the hypervisor is data associated with the guest virtual processor associated with that SMT thread. This is enforced through the hypervisor’s page table selectively mapping only the memory associated with the guest virtual processor. No data for any other guest virtual processor is addressable, and thus, the only data that can be brought into the L1 data cache by the hypervisor is data associated with that current guest virtual processor.

Thus, regardless of whether a given virtual processor is running in the guest VM virtual processor context or in the hypervisor context, the only data that can be brought into the cache is data associated with the active guest virtual processor. No additional privileged hypervisor secrets or data from other guest virtual processors can be brought into the L1 data cache.

This strong address space isolation provides two distinct benefits:

  1. The hypervisor does not need to coordinate entry and exits into the hypervisor across sibling SMT threads. So, SMT threads can enter and exit the hypervisor context independently without any additional performance overhead.
  2. The hypervisor does not need to flush the L1 data cache when entering the guest VP context from the hypervisor context. Since the only data that can be brought into the cache while executing in the hypervisor context is data associated with the guest virtual processor, there is no risk of privileged/private state in the cache that needs to be protected from the guest. Thus, with this strong address space isolation, the hypervisor only needs to flush the L1 data cache when switching between virtual cores on a physical core. This is much less frequent than the switches between the hypervisor and guest VP contexts.

Sensitive Data Scrubbing

There are cases where virtual processor address space isolation is insufficient to ensure isolation of sensitive data. Specifically, in the case of nested virtualization, a single virtual processor may itself run multiple guest virtual processors. Consider the case of a L1 guest VM running a nested hypervisor (L1 hypervisor). In this case, a virtual processor in this L1 guest may be used to run nested virtual processors for L2 VMs being managed by the L1 nested hypervisor.

In this case, the nested L1 guest hypervisor will be context switching between each of these nested L2 guests (for example, VM A and VM B) and the nested L1 guest hypervisor. Thus, a virtual processor for the L1 VM being maintained by the L0 hypervisor can run multiple different security domains – a nested L1 hypervisor context and one or more L2 guest virtual machine contexts. Since the L0 hypervisor maintains a single address space for the L1 VM’s virtual processor, this address space could contain data for the nested L1 guest hypervisor and L2 guest VMs.

To ensure a strong isolation boundary between these different security domains, the L0 hypervisor relies on a technique we refer to as state scrubbing when nested virtualization is in-use. With state scrubbing, the L0 hypervisor will avoid caching any sensitive guest state in its data structures. If the L0 hypervisor must read guest data, like register contents, into its private memory to complete an operation, the L0 hypervisor will overwrite this memory with 0’s prior to exiting the L0 hypervisor context. This ensures that any sensitive L1 guest hypervisor or L2 guest virtual processor state is not resident in the cache when switching between security domains in the L1 guest VM.

For example, if the L1 guest hypervisor accesses an I/O port that is emulated by the L0 hypervisor, the L0 hypervisor context will become active. To properly emulate the I/O port access, the L0 hypervisor will have to read the current guest register contents for the L1 guest hypervisor context, and these register contents will be copied to internal L0 hypervisor memory. When the L0 hypervisor has completed emulation of the I/O port access, the L0 hypervisor will overwrite any L0 hypervisor memory that contains register contents for the L1 guest hypervisor context. After clearing out its internal memory, the L0 hypervisor will resume the L1 guest hypervisor context. This ensures that no sensitive data stays in the L0 hypervisor’s internal memory across invocations of the L0 hypervisor context. Thus, in the above example, there will not be any sensitive L1 guest hypervisor state in the L0 hypervisor’s private memory. This mitigates the risk that sensitive L1 guest hypervisor state will be brought into the data cache the next time the L0 hypervisor context becomes active.

As described above, this state scrubbing model does involve some extra processing when nested virtualization is in-use. To minimize this processing, the L0 hypervisor is very careful in tracking when it needs to scrub its memory, so it can do this with minimal overhead. The overhead of this extra processing is negligible in the nested virtualization scenarios we have measured.

Finally, the L0 hypervisor state scrubbing ensures that the L0 hypervisor can efficiently and safely provide nested virtualization to L1 guest virtual machines. However, to fully mitigate inter-VM attacks between L2 guest virtual machines, the nested L1 guest hypervisor must implement a mitigation for the L1TF vulnerability. This means the L1 guest hypervisor needs to appropriately manage the L1 data cache to ensure isolation of sensitive data across the L2 guest virtual machine security boundaries. The Hyper-V L0 hypervisor exposes the appropriate capabilities to L1 guest hypervisors to allow L1 guest hypervisors to perform L1 data cache flushes.

Conclusion

By using a combination of core scheduling, address space isolation, and data clearing, Hyper-V HyperClear is able to mitigate the L1TF speculative execution side channel attack across VMs with negligible performance impact and with full support of SMT.

Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To provide a simple demonstration of the workflow, we have a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand to speak over physical COM, so we ran the application inside of a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, an application was created to generate random integer values and send them over a named pipe to the VM’s virtual COM port (this is the PowerShell console on the left).

As we see in the video at the beginning, if we do not assign COM ports to our container, when the application runs in the container and tries to open a handle to the COM port, it fails with an IOException (because as far as the container knew, the COM port didn’t exist!). On our second run of the container, we assign the COM port to the container and the application successfully gets and prints out the incoming random ints generated by our app running on the host.

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we’re passing to the device argument is simple: it looks for an IdType and an Id. For this coming release of Windows, we only support an IdType of “class”. For Id, this is a device interface class GUID. The values are delimited by a slash, “/”. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for a user to specify an interface class, and all devices which identify as implementing this class will be plumbed into the container.

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes. We’re starting with simple buses. The complete list that we currently support is below.

Device Type    Interface Class GUID
GPIO           916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus        A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port       86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus        DCDE6AF9-6610-4285-828F-CAAF78C424CC

Stay tuned for a Part 2 of this blog that explores the architectural decisions we chose to make in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.

Cheers,

Craig Wilhite (@CraigWilhite)

How to Monitor Hyper-V Performance with PowerShell

Virtual machines can quickly lose speed and efficiency unless managed properly. Using PowerShell, you can monitor Hyper-V performance so you can keep on top of your performance levels and ensure your Hyper-V VMs are running optimally at all times.

In my last article, I demonstrated how to work with performance counters but from a WMI (Windows Management Instrumentation) perspective, using the corresponding Win32 classes with Get-CimInstance. Today I want to circle back to using Get-Counter to retrieve performance counter information but as part of a toolmaking process. I expect that when you are looking at performance counters, you do so on a very granular level. That is, you are only interested in data from a specific counter. I am too. In fact, I want to develop some tooling around a performance counter so that I can quickly get the information I need.

Getting Started

I’m using Hyper-V running on my Windows 10 desktop, but there’s no reason you can’t substitute your own Hyper-V host.

You should be able to test my code by setting your own value for $Computer.

Hyper-V Performance Counters

Of all the Hyper-V performance counters, the one that interests me is part of the Hyper-V Dynamic Memory VM set.
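
To see what that set contains, you can list it with Get-Counter; $Computer here is a placeholder for your Hyper-V host name:

$Computer = 'HV01'    # substitute your own Hyper-V host name
Get-Counter -ListSet 'Hyper-V Dynamic Memory VM' -ComputerName $Computer |
    Select-Object -ExpandProperty Counter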

Dynamic Memory Counters

I am especially interested in the pressure-related counters. These should give me an indication of whether the virtual machine is running low on memory. You sometimes see this in the Hyper-V management console when you look at the memory tab for a given virtual machine. Sometimes you’ll see a Low status. I want to be able to monitor these pressure levels from PowerShell.

After a little research, I found the corresponding WMI class.
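
The class in question should be the one backing that counter set; a sketch of the query, assuming the class name Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM and the property names shown:

# property names are as I recall them; check Get-CimClass output on your host
Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM -ComputerName $Computer |
    Select-Object -Property Name, CurrentPressure, AveragePressure, MaximumPressure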

Memory Counters via WMI and CIM

As you can see, SRV2 is running a bit high. One of the benefits of using a WMI class instead of Get-Counter is that I can create a filter.
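
A filtered version of the same query, using an arbitrary threshold of 80:

Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM -ComputerName $Computer -Filter "CurrentPressure >= 80"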

High Memory Pressure VM

Building Tools With What We’ve Done So Far

One tool I could create would be to turn this one-line command into a function, perhaps adding the Hyper-V host as a parameter. I could set the function to run in a PowerShell scheduled job.

Another option would be to register a WMI event subscription. This is an advanced topic that we don’t have room to cover in great detail. But here is some sample code.
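
A sketch of that sample, with the source identifier and CSV path as placeholder values; the next paragraph walks through what it does:

$query = "SELECT * FROM __InstanceModificationEvent WITHIN 30 " +
         "WHERE TargetInstance ISA 'Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM' " +
         "AND TargetInstance.CurrentPressure >= 80"

Register-CimIndicationEvent -Query $query -SourceIdentifier 'VMPressure' -Action {
    # record the VM name and pressure value each time the threshold is crossed
    $Event.SourceEventArgs.NewEvent.TargetInstance |
        Select-Object -Property @{Name='Time';Expression={Get-Date}}, Name, CurrentPressure |
        Export-Csv -Path C:\Reports\VMPressure.csv -Append -NoTypeInformation
}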

The code checks every 30 seconds (the WITHIN 30 clause in the WQL query) for instances of the performance counter where the current pressure value is greater than or equal to 80. I am registering the event subscription on my computer. As long as my PowerShell session is open, any time a VM reaches a Current Pressure of 80 or higher, information is logged to a CSV file.

When using an Action scriptblock, you won’t see raised events with Get-Event. The only way I can tell that it fired is by looking at the CSV file.


To manually stop watching, simply unregister the event.
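
Using the placeholder source identifier from the sketch above:

Unregister-Event -SourceIdentifier 'VMPressure'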

Using this kind of event subscription has a number of other applications when it comes to managing Hyper-V. I expect I’ll revisit this topic again.

But there’s one more technique I want to share before we wrap up for today.

Usually, I am a big believer in taking advantage of PowerShell objects in the pipeline. Using Write-Host is generally frowned upon. But there are always exceptions, and here is one of them. I want a quick way to tell if a virtual machine is under pressure. Color coding will certainly catch my eye. Instead of writing objects to the pipeline, I’ll write a string of information to the console. But I will color code it depending on the value of CurrentPressure. You will likely want to set your own thresholds; I chose values that would give me something interesting to display.
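
A rough sketch of the idea, reusing the same class and some purely arbitrary thresholds:

Get-CimInstance -ClassName Win32_PerfFormattedData_BalancerStats_HyperVDynamicMemoryVM -ComputerName $Computer |
ForEach-Object {
    # pick a console color based on the VM's current memory pressure (thresholds are arbitrary)
    $color = if ($_.CurrentPressure -ge 80) { 'Red' }
             elseif ($_.CurrentPressure -ge 60) { 'Yellow' }
             else { 'Green' }
    Write-Host ("{0,-25} CurrentPressure: {1}" -f $_.Name, $_.CurrentPressure) -ForegroundColor $color
}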

It wouldn’t take much to turn this into a function and create a reusable tool.

Colorized Performance Counters

I have at least one other performance monitoring technique I want to share with you, but I think I’ve given you plenty to try out for today, so I’ll cover that in my next article.

Wrap-Up

Have you built any custom tools for your Hyper-V environment? Do you find these types of tools helpful? Would you like us to do more? Let us know in the comments section below!

Thanks for reading!

Curious About Windows Server 2019? Here’s the Latest Features Added

Microsoft continues adding new features to Windows Server 2019 and cranking out new builds for Windows Server Insiders to test. Build 17709 has been announced, and I got my hands on a copy. I’ll show you a quick overview of the new features and then report my experiences.

If you’d like to get into the Insider program so that you can test out preview builds of Windows Server 2019 yourself, sign up on the Insiders page.

Ongoing Testing Requests

If you’re just now getting involved with the Windows Server Insider program or the previews for Windows Server 2019, Microsoft has asked all testers to try a couple of things with every new build:

  • In-place upgrade
  • Application compatibility

You can use virtual machines with checkpoints to easily test both of these. This time around, I used a physical machine, and my upgrade process went very badly. I have not been as diligent about testing applications, so I have nothing of importance to note on that front.

Build 17709 Feature 1: Improvements to Group Managed Service Accounts for Containers

I would bet that web applications are the primary use case for containers. Nothing else can match containers’ ability to strike a balance between providing version-specific dependencies and consuming minimal resources. However, containerizing a web application that depends on Active Directory authentication presents special challenges. Group Managed Service Accounts (gMSA) can solve those problems, but rarely without headaches. 17709 includes these improvements for gMSAs:

  • Using a single gMSA to secure multiple containers should produce fewer authentication errors
  • A gMSA no longer needs to have the same name as the system that hosts the container(s)
  • gMSAs should now work with Hyper-V isolated containers

I do not personally use enough containers to have meaningful experience with gMSA. I did not perform any testing on this enhancement.

Build 17709 Feature 2: A New Windows Server Container Image with Enhanced Capabilities

If you’ve been wanting to run something in a Windows Server container but none of the existing images meet your prerequisites, you might have struck gold in this release. Microsoft has created a new Windows Server container image with more components. I do not have a complete list of those components, but you can read what Lars Iwer has to say about it. He specifically mentions:

  • Proofing tools
  • Automated UI tests
  • DirectX

As I read that last item, I instantly wanted to know: “Does that mean GUI apps from within containers?” Well, according to the comments on the announcement, yes*. You just have to use “Session 0”. That means that if you RDP to the container host, you must use the /admin switch with MSTSC. Alternatively, you can use the physical console or an out-of-band console connection application.

Commentary on Windows Server 2019 Insider Preview Build 17709

So far, my experiences with the Windows Server 2019 preview releases have been fairly humdrum. They work as advertised, with the occasional minor glitch. This time, I spent more time than normal and hit several frustration points.

In-Place Upgrade to 17709

Ordinarily, I test preview upgrades in a virtual machine. Sure, I use checkpoints with the intent of reverting if something breaks. But, since I don’t do much in those virtual machines, they always work. So, I never encounter anything to report.

For 17709, I wanted to try out the container stuff, and I wanted to do it on hardware. So, I attempted an in-place upgrade of a physical host. It was disastrous.

Errors While Upgrading

First, I got a grammatically atrocious message that contained false information. I wish that I had saved it so I could share it with others that might encounter it, but I must have accidentally lost my notes. The message started out with “Something happened” (it didn’t say what happened, of course), then asked me to look in an XML file for information. Two problems with that:

  1. I was using a Server Core installation. I realize that I am not authorized to speak on behalf of the world’s Windows administrators, but I bet no one will get mad at me for saying, “No one in the world wants to read XML files on Server Core.”
  2. The installer didn’t even create the file.

I still have not decided which of those two things irritates me the most. Why in the world would anyone actively decide to build the upgrade tool to behave that way?

Problems While Trying to Figure Out the Error

Well, I’m fairly industrious, so I tried to figure out what was wrong. The installer did not create the XML file that it talked about, but it did create a file called “setuperr.log”. I didn’t keep the entire contents of that file either, but it contained only one line error-wise that seemed to have any information at all: “CallPidGenX: PidGenX function failed on this product key”. Do you know what that means? I don’t know what that means. Do you know what to do about it? I don’t know what to do about it. Is that error even related to my problem? I don’t even know that much.

I didn’t find any other traces or logs with error messages anywhere.

How I Fixed My Upgrade Problem

I began by plugging the error messages into Internet searches. I found only one hit with any useful information. The suggestions were largely useless. But, the guy managed to fix his own problem by removing the system from the domain. How in the world did he get from that error message to disjoining the domain? Guesswork, apparently. Well, I didn’t go quite that far.

My “fix”: remove the host from my Hyper-V cluster. The upgrade worked after that.

Why did I put the word “fix” in quotation marks? Because I can’t tell you that actually fixed the problem. Maybe it was just a coincidence. The upgrade’s error handling and messaging was so horrifically useless that without duplicating the whole thing, I cannot conclusively say that one action resulted in the other. “Correlation is not causation”, as the saying goes.

Feedback for In-Place Upgrades

At some point, I need to find a productive way to express this to Microsoft. But for now, I’m upset and frustrated at how that went. Sure, it only took you a few minutes to read what I had to say. It took much longer for me to retry, poke around, search, and prod at the thing until it worked, and I had no idea that it was ever going to work.

Sure, once the upgrade went through, everything was fine. I’m quite happy with the final product. But if I were even to start thinking about upgrading a production system, and I thought there was even a tiny chance that it would dump me out partway through with some unintelligible gibberish and send me on a luck-of-the-draw scavenger hunt, then there is a zero percent chance that I would attempt the upgrade at all. Microsoft says that they’re working to improve the in-place upgrade experience, but the evidence I saw led me to believe that they don’t take this seriously. XML files? XML files that don’t even get created? Error messages that would have set off 1980s-era grammar checkers, and that don’t even mean anything? This is the upgrade experience that Microsoft is anxious to show off? No thanks.

Microsoft: the world wants legible, actionable error messages. The world does not want to go spelunking through log files for vague hints. That’s not just for an upgrade process either. It’s true for every product, every time.

The New Container Image

OK, let’s move on to some (more) positive things. Many of the things that you’ll see in this section have been blatantly stolen from Microsoft’s announcement.

Once my upgrade went through, I immediately started pulling down the new container image. I had a bit of difficulty with that, which Lars Iwer of Microsoft straightened out quickly. If you’re trying it out, you can get the latest image with the following:
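
Something along these lines should do it (same registry path as the docker run command further down; tags vary between Insider releases):

    docker pull mcr.microsoft.com/windowsinsider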

Since Insider builds update frequently, you might want to ensure that you only pull the build version that matches your host version (if you get a version mismatch, you’ll be forced to run the image under Hyper-V isolation). Lars Iwer provided a script for exactly this in the previously linked article (his work entirely; I did not write or modify it).
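
The gist looks something like this rough sketch of mine; the registry lookup and the tag format are assumptions on my part, not his code, so grab the real script from the linked article:

    # Read the host's build number and pull the Insider image tag that matches it,
    # so the container can run without Hyper-V isolation.
    # The tag format below is a guess; check the repository's available tags if the pull fails.
    $build = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild
    docker pull "mcr.microsoft.com/windowsinsider:10.0.$build"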

Trying Out the New Container Image

I was able to easily start up a container and poke around a bit:
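
For the record, that was nothing fancier than an interactive run of the Insider image:

    docker run -it mcr.microsoft.com/windowsinsider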

Testing out the new functionality was a bit tougher, though. It solves problems that I personally do not have. Searching the Internet for “example apps that would run in a Windows Server container if Microsoft had included more components” didn’t turn up anything I could test with, either (that was a joke; I didn’t really do that. As far as you know). So, I first wrote a little GUI .Net app in Visual Studio.

*Graphical Applications in the New Container Image

Session 0 does not seem to be able to show GUI apps from the new container image. If you skimmed up to this point and you’re about to tell me that GUI apps never display anything from Windows containers, this links back to the (*) text above: the comments section of the announcement article indicates that graphical apps in the new container will display on session 0 of the container host.

I don’t know if I did something wrong, but nothing that I did would show me a GUI from within the new container style. The app ran just fine (it showed up under Get-Process), but it never displayed anything. It behaved exactly the same way under microsoft/dotnet-framework in Hyper-V isolation mode, though. So, on that front, the only benefit that I could verify was that I did not need to run my .Net app in Hyper-V isolation mode or use a lot of complicated FROM nesting in my dockerfile. Still no GUI, though, and that was part of my goal.

DirectX Applications in the New Container Image

After failing to get my graphical .Net app to display, I next considered DirectX. I personally do not know how to write even a minimal DirectX app. But, I didn’t need to. Microsoft includes the very first DirectX-dependent app that I was ever able to successfully run: dxdiag.

Sadly, dxdiag would not display on session 0 from my container, either. Just as with my .Net app, it appeared in the local process list and docker top. But, no GUI that I could see.

However, dxdiag did run successfully, and would generate an output file:
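
The /t switch tells dxdiag to write its report to a text file instead of trying to display a window (the path here is just an example); dxdiag exits right away, and the file shows up after a short wait:

    dxdiag /t C:\dxdiag.txt
    type C:\dxdiag.txt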

Notes for anyone trying to duplicate the above:

  • I started this particular container with
    docker run -it mcr.microsoft.com/windowsinsider
  • DXDiag does not instantly create the output file. You have to wait a bit.

Thoughts on the New Container Image

I do wish that I had more experience with containers and the sorts of problems this new image addresses. Without that, I can’t say much more than, “Cool!” Sure, I didn’t personally get the graphical part to work, but a DirectX app from within a container? That’s a big deal.

Overall Thoughts on Windows Server 2019 Preview Build 17709

Outside of the new features, I noticed that they have corrected a few glitchy things from previous builds. I can change settings on network cards in the GUI now and I can type into the Start menu to get Cortana to search for things. You can definitely see changes in the polish and shine as we approach release.

As for the upgrade process, that needs lots of work. If a blocking condition exists, it needs to be caught in the pre-flight checks and reported with a clear error message. Failing partway into the process with random pseudo-English will extend distrust of upgrading Microsoft operating systems for another decade. Most established shops already have an “install-new-on-new-hardware-and-migrate” process. I certainly follow one. My experience with 17709 tells me that I need to stick with it.

I am excited to see the work being done on containers. I do not personally have any problems that this new image solves, but you can clearly see that customer feedback led directly to its creation. Whether I personally benefit or not, this is a good thing to see.

Overall, I am pleased with the progress and direction of Windows Server 2019. What about you? How do you feel about the latest features? Let me know in the comments below!

How to Create Automated Hyper-V Performance Reports

Wouldn’t it be nice to periodically get an automatic performance review of your Hyper-V VMs? Well, this blog post shows you how to do exactly that.

Hyper-V Performance Counters & Past Material

Over the last few weeks, I’ve been working with Hyper-V performance counters and PowerShell, developing new reporting tools. I thought I’d write about Hyper-V performance counters here, until I realized I already have:
https://www.altaro.com/hyper-v/performance-counters-hyper-v-and-powershell-part-1/
https://www.altaro.com/hyper-v/hyper-v-performance-counters-and-powershell-part-2/
Even though I wrote these articles several years ago, nothing has really changed. If you aren’t familiar with Hyper-V performance counters, I encourage you to take a few minutes and read them. Otherwise, some of the material in this article might not make sense.

Get-CimInstance

Normally, using Get-Counter is a better approach, especially if you want to watch performance over a given timespan. But sometimes you just want a quick point-in-time snapshot, or you may have network challenges. As far as I can tell, Get-Counter uses legacy networking protocols, i.e. RPC and DCOM, which are not very firewall friendly. You could use PowerShell Remoting and Invoke-Command to run Get-Counter on the remote server. Or you can use Get-CimInstance, which is what I want to cover in this article.

The performance counter data that Get-Counter reads is also exposed through performance counter classes in WMI. This means you can get the same information using Get-CimInstance or Get-WmiObject. But because we want to leverage WSMan and PowerShell Remoting, we’ll stick with the former.

Building a Hyper-V Performance Report

First, we need to identify the counter classes. I’ll focus on the classes that have “cooked” or formatted data.

I’m setting a variable for the computer name so that you can easily re-use the code. I’m demonstrating this on a Windows 10 desktop running Hyper-V, but you can just as easily point $Computer to a Hyper-V host.
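
As a rough sketch, enumerating the cooked Hyper-V counter classes only takes a couple of lines (matching on 'HyperV' in the class name is a convenient assumption; adjust it if your classes are named differently):

    # Point this at a Hyper-V host; I'm using the local machine here.
    # Note: -ComputerName goes over WSMan, so WinRM must be enabled on the target.
    $Computer = $env:COMPUTERNAME

    # Find the "formatted" (cooked) Hyper-V performance counter classes.
    $classes = Get-CimClass -Namespace root\CIMv2 -ClassName Win32_PerfFormattedData_* -ComputerName $Computer |
        Where-Object { $_.CimClassName -match 'HyperV' } |
        Select-Object -ExpandProperty CimClassName

    $classes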

It is pretty easy to leverage the PowerShell pipeline and create a report for all Hyper-V performance counters.
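
A minimal version of that, building on the $classes list above, might look like this (the output path is arbitrary):

    # Dump every Hyper-V counter class and all of its instances to a text file.
    $report = foreach ($class in $classes) {
        "==== $class ===="
        Get-CimInstance -ClassName $class -ComputerName $Computer | Format-List | Out-String
    }
    $report | Out-File -FilePath C:\Reports\HyperV-Counters.txt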

The text file will list each performance counter class followed by all instances of that class. If you run this code, you’ll see that a number of properties have no values at all; it might help to filter those out. Here’s a snippet of code that is a variation on the text file version: it creates an HTML report instead, skipping properties that likely have no value.
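
Here is a sketch of that variation; the property-count cutoff and the output path are arbitrary choices of mine:

    # Build an HTML fragment per counter class, skipping properties with no value.
    $fragments = foreach ($class in $classes) {
        $instances = Get-CimInstance -ClassName $class -ComputerName $Computer
        if ($instances) {
            # Keep only the properties that actually have a value on the first instance.
            $props = $instances[0].PSObject.Properties |
                Where-Object { $null -ne $_.Value -and $_.Name -notmatch '^(Cim|PSComputerName)' } |
                Select-Object -ExpandProperty Name
            # Wide classes read better as a list; narrow ones as a table.
            $as = if ($props.Count -gt 10) { 'List' } else { 'Table' }
            $instances | Select-Object -Property $props |
                ConvertTo-Html -Fragment -As $as -PreContent "<h2>$class</h2>"
        }
    }
    ConvertTo-Html -Body $fragments -Title 'Hyper-V Performance Report' |
        Out-File -FilePath C:\Reports\HyperV-Counters.html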

This code creates the HTML report using fragments. I am also deciding dynamically whether to create a table or a list, based on the number of properties.

HTML Performance Counter Report

Thus far I’ve been creating reports for all performance counters and all instances. But you might only be interested in a single virtual machine. This is a situation where you can take advantage of WMI filtering.

In looking at the output from all classes, I can see that the Name property on these classes can include the virtual machine name as part of the value. So I will go through every class and filter only for instances that contain the name of the VM I want to monitor.

This example also adds a footer to the report showing when it was created.
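
Pieced together, that approach looks something like this sketch (the VM name is a placeholder):

    $VMName = 'SRV1'    # substitute one of your own virtual machines
    $fragments = foreach ($class in $classes) {
        # The filter runs server-side; % is the WQL wildcard character.
        $instances = Get-CimInstance -ClassName $class -ComputerName $Computer -Filter "Name LIKE '%$VMName%'"
        if ($instances) {
            $instances | ConvertTo-Html -Fragment -PreContent "<h2>$class</h2>"
        }
    }
    $footer = "<p><i>Report created $(Get-Date)</i></p>"
    ConvertTo-Html -Body $fragments -Title "Hyper-V Performance: $VMName" -PostContent $footer |
        Out-File -FilePath "C:\Reports\$VMName-Performance.html"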

HTML Performance Report for a Single VM

It doesn’t take much more effort to create a report for each virtual machine. I turned my code into the beginning of a usable PowerShell function, designed to take pipeline input.
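
The skeleton of such a function might look like this; the name and parameters are mine, and it is a starting point rather than a finished tool:

    Function New-VMPerformanceReport {
        [cmdletbinding()]
        Param(
            [Parameter(Position = 0, Mandatory, ValueFromPipeline)]
            [string]$VMName,
            [string]$ComputerName = $env:COMPUTERNAME,
            [string]$Path = '.'
        )
        Begin {
            # Discover the cooked Hyper-V counter classes once, up front.
            $classes = Get-CimClass -Namespace root\CIMv2 -ClassName Win32_PerfFormattedData_* -ComputerName $ComputerName |
                Where-Object { $_.CimClassName -match 'HyperV' } |
                Select-Object -ExpandProperty CimClassName
        }
        Process {
            # One HTML report per virtual machine name received from the pipeline.
            $fragments = foreach ($class in $classes) {
                $instances = Get-CimInstance -ClassName $class -ComputerName $ComputerName -Filter "Name LIKE '%$VMName%'"
                if ($instances) {
                    $instances | ConvertTo-Html -Fragment -PreContent "<h2>$class</h2>"
                }
            }
            $file = Join-Path -Path $Path -ChildPath "$VMName-Performance.html"
            ConvertTo-Html -Body $fragments -Title "Hyper-V Performance: $VMName" `
                -PostContent "<p><i>Report created $(Get-Date)</i></p>" |
                Out-File -FilePath $file
        }
    }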

With this function, I can query as many virtual machines as I like and create a performance report for each.
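
For example (the host name and output folder are placeholders):

    # One report per VM on host HV01
    (Get-VM -ComputerName HV01).Name |
        New-VMPerformanceReport -ComputerName HV01 -Path C:\Reports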

You can take this idea a step further and run this as a PowerShell scheduled job, perhaps saving the report files to an internal team web server.
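
Registering that as a scheduled job only takes a trigger and a script block; the schedule, script path, and share name below are examples:

    # Requires an elevated session; uses the PSScheduledJob module
    $trigger = New-JobTrigger -Daily -At '6:00 AM'
    Register-ScheduledJob -Name 'HyperVPerfReport' -Trigger $trigger -ScriptBlock {
        . C:\Scripts\New-VMPerformanceReport.ps1    # load the function first
        (Get-VM -ComputerName HV01).Name |
            New-VMPerformanceReport -ComputerName HV01 -Path '\\webserver\reports'
    }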

I have at least one other intriguing PowerShell technique for working with Hyper-V performance counters, but I think I’ve given you enough to work with for today so I’ll save it until next time.

Wrap-Up

Did you find this useful? Have you done something similar and used a different method? Let us know in the comments section below!

Thanks for Reading!

The Complete Guide to Azure Virtual Machines: Part 2

This is part 2 of our Azure Virtual Machines Guide, following our previous article, Introduction to Azure Virtual Machines. In part 1 we created a virtual network for our VM; now we will create a network security group and finally deploy our VM.

Creating the Network Security Group (NSG)

A network security group is like the firewall for our VM: it governs what access is allowed to it, so it’s important to set one up before deploying the VM. To create one, we simply select Create a resource on the left-hand side of the Azure management portal and type in “Network Security Group”. We will be presented with the proper blade to create one, so click Create:

Now we need to fill in some fields to create our NSG. For this example, I’ll name our NSG “LukeLabNSG”. Next, we select the subscription and the resource group that this NSG will belong to, and then the Azure data center location where it will reside. Once everything is filled out, we click Create:

We wait for the NSG to deploy. Once it has completed, we can view it by clicking All Services on the left-hand side and selecting Network Security Groups:

We can now see our new NSG, and we can further configure it by clicking on the name:

We need to associate this NSG with a subnet, so select Subnets on the left-hand side:

Now click the Associate button so we can select the subnet and the virtual network that we created in part 1. Remember, we created these when we set up the Virtual Network:

We can now see that we have the LukeLabVnet1 virtual network that we created and the LukeLabSubnet assigned to this network security group. Click OK to configure:

Select Inbound security rules on the left-hand side. We want to enable RDP access to this VM so that we can connect to it. Note that for the purpose of this demo we are going to allow RDP access via the public internet; however, this is not best practice for a production environment. In production, you would set up a VPN connection and use RDP over the VPN, which is much more secure. To create our new rule, select the Add button:

If we wanted to do any sort of advanced configuration, such as allowing specific ports, we could input that information in these fields. However, since we are just enabling RDP, which is a common port, we can pick it from Microsoft’s list of commonly used services instead. To do this, we will click the Basic button at the top:

Now we simply select RDP from the Service drop-down list and the proper information will automatically be filled in. Then we put in a description of the rule and select Add. Also, note that Azure gives us the same warning about exposing RDP to the internet:
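
As an aside, if you would rather script these NSG steps than click through the portal, a rough equivalent with the Az PowerShell module looks something like this (the resource group name is a placeholder; the rest of the names match the walkthrough):

    # Create the NSG, allow RDP inbound, and associate it with the existing subnet.
    $rg  = 'LukeLabRG'    # placeholder resource group name
    $nsg = New-AzNetworkSecurityGroup -Name 'LukeLabNSG' -ResourceGroupName $rg -Location 'EastUS'
    $nsg | Add-AzNetworkSecurityRuleConfig -Name 'Allow-RDP' -Direction Inbound -Access Allow `
        -Protocol Tcp -Priority 1000 -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 3389 |
        Set-AzNetworkSecurityGroup
    $vnet   = Get-AzVirtualNetwork -Name 'LukeLabVnet1' -ResourceGroupName $rg
    $subnet = $vnet.Subnets | Where-Object Name -eq 'LukeLabSubnet'
    Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'LukeLabSubnet' `
        -AddressPrefix $subnet.AddressPrefix -NetworkSecurityGroup $nsg |
        Set-AzVirtualNetwork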

Now that we’ve set up our NSG, we can finally deploy our VM.

Deploying a Virtual Machine

Now that we have our Virtual Network and Network Security Group created, we are ready to deploy the Virtual Machine. To do this, select the Create a resource button on the left-hand side and type in Windows Server 2016 Datacenter. Select Windows Server 2016 Datacenter from the list and click Create:

Now we need to fill out the form shown here to configure our Virtual Machine. For the purposes of this demo, I named mine “LukeLabVM01”. You also need to give it a username and password (use a strong password!). We’ll select the resource group and the Azure data center location where this VM will be hosted, “East US” in this case. Clicking OK will then bring us to the next step:

Select the compute size of the VM that you would like to deploy. The estimated pricing is on the right-hand side:

NOTE: The pricing shown here is for compute costs only. If you need a more detailed breakdown, take a look at the Azure Pricing Calculator

Now we need to fill in the last set of configuration settings. We need to create an availability set; this is very important to get right because a VM’s availability set cannot be changed unless the VM is rebuilt. (I’ll be putting together a future post on working with availability sets, so stay tuned for that!) In this example, we’ve simply created an availability set during the deployment process and named it LukeLabAS1. We then assign the virtual network and subnet that we created in part 1:

Under Network Security Group, click Advanced and select the NSG that we created in the previous steps. Then click OK to verify the settings:

If all of the settings pass the verification process, we are given the option to deploy the VM. Click Create, then wait for the VM to finish deploying.
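
For the script-inclined, a rough Az PowerShell equivalent of this deployment might look like the following, using the simplified New-AzVM parameter set (the resource group name and VM size here are placeholders):

    $cred = Get-Credential    # the local admin username and password for the VM
    New-AzVM -ResourceGroupName 'LukeLabRG' -Name 'LukeLabVM01' -Location 'EastUS' `
        -Image 'Win2016Datacenter' -Size 'Standard_B2s' -Credential $cred `
        -VirtualNetworkName 'LukeLabVnet1' -SubnetName 'LukeLabSubnet' `
        -SecurityGroupName 'LukeLabNSG' -AvailabilitySetName 'LukeLabAS1'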

Once the deployment process is finished, we can see the newly created VM under Virtual Machines. Click Start to power on the VM if it is not already running:

Then click on the VM name and select Connect at the top to get connected to the VM:

Azure gives us two options: SSH or RDP. In this demo we will RDP to the VM, so select the RDP tab and click Download RDP file:

Once the RDP file is downloaded, open it up, select Connect, and enter the credentials that we created when we configured the VM:

Now we have access to our VM, and I’ve verified that the hostname of the VM is the one we specified in the deployment settings by bringing up a command prompt:

Wrap-Up

The flexibility of the cloud allows us to stand up Virtual Machines very quickly, and it can be a very advantageous solution for applications that need to scale out massively, or for situations where investing in hardware doesn’t make sense given the expected lifespan of the application. However, there is a steep learning curve when it comes to building and managing cloud resources, and being aware of each component is critical to the success of running your workloads in the cloud.

What have your experiences with Azure VMs been like so far? Have you found they fit well in your playbook? Have you experienced difficulties? Have questions? Let us know in the comments section below!