Category Archives: Hyper-V

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.

rv_idevscsi

Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.

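As a quick, hedged example (the path and target size here are placeholders; substitute your own), a grow operation looks like this:

    Resize-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demo.vhdx' -SizeBytes 40gb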
The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
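For example, assuming the same hypothetical path as above, you can check those values with Get-VHD:

    Get-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demo.vhdx' | Select-Object Path, Size, MinimumSize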

Resize-VHD only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
    rv_actionseditdisk
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
    rv_browse
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is Saved, start it and then shut it down properly; disks cannot be resized while a virtual machine is Saved.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
    rv_vmsettingsedit
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
    rv_exorshrink
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    rv_expand
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    rv_expand
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:

rv_extend

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.
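If you’d rather script the growth from inside a Windows guest than click through Disk Management, a minimal sketch (assuming the expanded disk carries the C: volume) looks like this:

    # Rescan so the guest notices the added space, then grow C: to the largest supported size.
    Update-HostStorageCache
    $max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
    Resize-Partition -DriveLetter C -Size $max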

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

Moving to a new home…

This is the last blog post that I am going to write as Virtual PC Guy. But do not fear, I am starting a new blog over at american-boffin.com, and all the Virtual PC Guy posts are going to remain intact.

You may wonder why I am making this change.

Well, there are several reasons.

  • It’s been a long time. I have written 1399 blog posts over 14 years – averaging one new post every other working day. When I started this blog, I had more hair and fewer children.
  • The world has changed. When I started writing as Virtual PC Guy, virtualization was a new and unknown technology. Cloud computing was not even invented yet. It is amazing to think about how far we have come!
  • The scope and impact of my work has drastically increased. When I started blogging, there were a very select group of people who cared about virtualization. Now, between cloud computing, the rise of virtualization and containerization as standard development tools – and the progress we have been making on delivering virtualization based security for all environments – more and more people are affected by my work.
  • I am a manager now. When I started on this blog I was a frontline program manager – and most of my time was spent thinking about and designing technology. I have been a program manager lead for almost a decade now – and while I do still spend a lot of time working with technology – I spend more time working with people.
  • Maintaining multiple blogs is hard. I have tried, from time to time, to start up separate blogs for different parts of my life. But maintaining a blog is a lot of work. Maintaining multiple blogs is just too much work for me.
  • Virtual PC Guy has a very distinctive style. Over the years I have toyed with the idea of switching up the style of Virtual PC Guy – but I have never been able to bring myself to do it.

For all these reasons – I have decided that the best thing to do would be to archive Virtual PC Guy (I have posts that are 10 years old and are still helping people out!) and to start a new blog.

On my new blog – I will still talk about technology – but I will also spend time talking about working with customers, working with people in a corporate environment, and about whatever hobbies I happen to be spending my time on.

I hope you come and join me on my new blog!

Cheers,
Ben

Major Hyper-V Developments in 2017 (& what lies ahead in 2018)

It’s that time of year again. I’m of course talking about that time of year when most IT Pros are huddled in their homes spending time with family and hopefully not beating their heads against servers that went down over the holidays. While I should honestly be doing the same, I always like to reflect on things around the holidays. I find it helpful to sum up the year we’ve had and to also think about what lies ahead in the coming new year.

We’ve had an amazing year here on the Altaro Hyper-V Hub. We’re so thankful to you, our readers, for your support. There have been a ton of new exciting technologies and features to talk about, and there is much more to come. While Hyper-V isn’t as “front and center” as it once was (more on this later), it remains a core piece of the Windows Server stack, and as such we’ll have much to talk about over the coming 12 months. However, before we talk about that, let’s take a little bit of time and reflect on where we’ve been.

Hyper-V logo

Hyper-V in 2017 – A Year-End Review

2017 was an interesting year overall in the Microsoft Space. I attended many Microsoft focused events over the last 12 months, including Inspire and Ignite, and I now finally have the sense that the cloud is truly taking things over. That is maybe not in the sense that every workload will reside in a public cloud, but more in the sense that every workload out there will be hosted in some sort of cloud, whether it’s public, hybrid or private. Sure, people have been saying that for the last several years, but this is the first year where I felt there was no denying it. Yes, people are still running on-prem, and there will be those that are running on-prem for a long time yet, but Microsoft’s Cloud Strategy is coming in hot and it’s becoming easier and more economical than ever to run things in a cloud, and Microsoft Azure Stack is one of the drivers of this change.

Azure Stack is in the House

Again, one of the key pieces of technology that is leading the charge in this movement is Microsoft Azure Stack (MAS). We’ve talked about Azure Stack in a few articles over the past months and we get a lot of comments stating it has nothing to do with Hyper-V. Well, the folks saying that are both right and wrong. They’re wrong in the sense that Hyper-V is in the stack and is a core piece of it. Yet, they are right at the same time because Hyper-V is becoming so abstracted in some of these new platforms that we no longer have direct access to it. The ARM management layer is used in Azure Stack to interface with Hyper-V, and many of the decisions and configurations that we would otherwise take on ourselves are simply automated and handled according to best practices. While it does amount to some loss of control, what this really allows us to do is provision workloads faster, and it has the added benefit of being the same deployment mechanism used in public Azure.

Take those ideas and now imagine a world where you have large enterprises, and regional service providers hosting their own Azure Stack instances. Now you literally have Azure and its management stack everywhere. Enterprise companies will use their local Azure Stack, and the SMB and SME will start a move to regionally hosted Azure Stack instances if they aren’t comfortable with the public cloud. Again, you’ll have the organizations that will still run on-prem, but the option for cloud will be everywhere, and that is really what Microsoft has done with Azure and MAS in 2017.

2017: The Year of the Container?

If I think about other disruptive technologies from over the last 12 months, another one that sticks out to me is containers. Containers have been around for some time in the form of docker and the open source community, but Microsoft has somewhat driven them into the mainstream this calendar year. If you’re a Hyper-V admin and you’re not familiar with containers, you should take a look. You can think of a container as an ultra-slim VM. It contains your application and just the supporting software needed for said app. The container then relies on the underlying host OS for OS related functions, so they aren’t as isolated as traditional VMs, but due to their makeup you can get much higher density of containers per host than VMs.

I attended a session (recording below) at Microsoft Ignite regarding containers. MetLife, a large insurance company here in the States, had reduced a significant chunk of their infrastructure by moving over to containers, and all I could think was: “I’m watching what VMs did to physical machines all those years, all over again!”. It was at that moment that I saw containers weren’t going away; they and server-less computing are here, now.

[embedded content]
Finally, I also had the pleasure of talking with Taylor Brown, who heads up containers at Microsoft. I met with Taylor at Ignite and we talked about what is new for containers in 2017 and where he sees them going in the future. You can view the video below.

[embedded content]

Hyper-V Continues to Get Enhancements

This wouldn’t be a year-end review article on a Hyper-V blog without some mention of Hyper-V, right? I’ve been talking about Containers and Azure Stack, but Hyper-V saw some big improvements this year, the largest of which I think is Project Honolulu.

hono_vminventory

Project Honolulu is the new upcoming management experience for on-premises workloads, including Hyper-V and its supporting infrastructure. The community has been asking Microsoft for an updated on-prem management experience for years, and they have delivered. While relatively new and web-based, Honolulu is a strong utility for so young a management tool, and if you haven’t had a chance to check out Project Honolulu yet, I highly encourage you to check out Eric’s article on the subject!

Other notable enhancements are:

What Comes Next in 2018?

Looking ahead is always subjective right? I won’t pretend to be the definitive expert on what happens in the Microsoft space over the next 12 months. However, I have been in this industry and in Microsoft circles long enough to have a pretty good gut feeling about where things are headed. So where do I think we’re headed?

Microsoft Will Continue to Push Cloud

This one is a no-brainer, right? Microsoft will continue to push Azure adoption, sure, but they will also be enabling service providers and enterprises to use Microsoft Azure Stack in an economical way. Like most new Microsoft technologies, MAS was initially intended for the huge enterprise, but it is now in a place where it can be scaled down to 4 nodes. This makes the entry point reachable for more organizations and will enable the proliferation of that Azure-anywhere idea I was talking about earlier. Don’t want to use public Azure? No problem! You’re about to have a whole lot of new options for where you place that “cloud” workload!

The Recent Net-Neutrality Vote Could Impact the Cloud

If you have eyes and ears, you’ve likely heard that the FCC stripped Net Neutrality of its Title II protections. With these protections repealed, ISPs now have the ability to tinker with, and charge more for, certain types of traffic. With so much moving to the cloud, I could potentially see a world where you need to buy a “Public Computing” package from your ISP to reach public cloud services such as Azure. That could limit Azure’s explosive growth to date, depending on the amount of tinkering done by ISPs.

More options if you run VMware

Microsoft and VMware really have an on-again/off-again relationship. You likely heard the news that Microsoft will provide the ability to run VMware workloads in Microsoft Azure. After the initial announcement came out, VMware stated they would deny support to such deployments, but I saw an article just the other day saying that they’re suddenly fine with it now. Whatever the outcome, it looks like we’re going to live in a world where you can run VMware workloads natively inside of Azure, which will ultimately lead to more Azure service adoption. Not surprising!

Hyper-V will Continue to be Abstracted

With containers, Azure Stack, and server-less computing, Hyper-V is becoming so abstracted these days that we don’t even have access to it in some cases. I’ll be blunt: the hypervisor has become commoditized. It’s no longer a niche, disruptive technology. As a result, I think you’ll start to see some skill-sets moving away from Hyper-V and onto other technologies and platforms that contain Hyper-V under the hood, such as MAS and Azure.

Our Top 10 Posts from 2017

To finish up our final 2017 article, I’d like to post a list. As I said, we had a great year on the Hyper-V blog, and we had a number of popular posts. We’ve compiled a list of those posts below for your reading pleasure. Enjoy!

Wrap-Up

With that, I can wrap up our 2017! We’ve enjoyed our year here at Altaro, and we’re looking forward to another fantastic year of providing Hyper-V howtos, breaking news, and more community focused content!

Any 2017 highlights or thoughts for the future you’d like to share that weren’t mentioned? Feel free to use the comments section below to let us know.

Happy New Year!

The Really Simple Guide to Hyper-V Networking

If you’re just getting started with Hyper-V and struggling with the networking configuration, you are not alone. I (and others) have written a great deal of introductory material on the subject, but sometimes, that’s just too much. I’m going to try a different approach. Rather than a thorough deep-dive on the topic that tries to cover all of the concepts and how-to, I’m just going to show you what you’re trying to accomplish. Then, I can just link you to the necessary supporting information so that you can make it into reality.

Getting Started

First things first. If you have a solid handle on layer 2 and layer 3 concepts, that’s helpful. If you have experience networking Windows machines, that’s also helpful. If you come to Hyper-V from a different hypervisor, then that knowledge won’t transfer well. If you apply ESXi networking design patterns to Hyper-V, then you will create a jumbled mess that will never function correctly or perform adequately.

Your Goals for Hyper-V Networking

You have two very basic goals:

  1. Ensure that the management operating system can communicate on the network
  2. Ensure that virtual machines can communicate on the network

rsn_goals

Any other goals that you bring to this endeavor are secondary, at best. If you have never done this before, don’t try to jump ahead to routing or anything else until you achieve these two basic goals.

Hyper-V Networking Rules

Understand what you must, can, and cannot do with Hyper-V networking:

What the Final Product Looks Like

It might help to have visualizations of correctly-configured Hyper-V virtual switches. I will only show images with a single physical adapter. You can use a team instead.

Networking for a Single Hyper-V Host, the Old Way

An old technique has survived from the pre-Hyper-V 2012 days. It uses a pair of physical adapters. One belongs to the management operating system. The other hosts a virtual switch that the virtual machines use. I don’t like this solution for a two adapter host. It leaves both the host and the virtual machines with a single point of failure. However, it could be useful if you have more than two adapters and create a team for the virtual machines to use. Either way, this design is perfectly viable whether I like it or not.

rsn_vswitch_split

Networking for a Single Hyper-V Host, the New Way

With teaming, you can just join all of the physical adapters together and let it host a single virtual switch. Let the management operating system and all of the guests connect through it.

rsn_vswitch_unified

Networking for a Clustered Hyper-V Host

For a stand-alone Hyper-V host, the management operating system only requires one connection to the network. Clustered hosts benefit from multiple connections. Before teaming was directly supported, we used a lot of physical adapters to make that happen. Now we can just use one big team to handle our host and our guest traffic. That looks like this:

rns_vswitch_cluster
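As a rough PowerShell sketch of that converged design (the adapter, team, and switch names are invented; match them to your hardware), the build-out might look like this:

    # Team the physical adapters, then place a single virtual switch on the team.
    New-NetLbfoTeam -Name 'ConvergedTeam' -TeamMembers 'NIC1', 'NIC2' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name 'vSwitch' -NetAdapterName 'ConvergedTeam' -AllowManagementOS $true
    # Clustered hosts benefit from additional management-OS adapters for cluster and Live Migration traffic.
    Add-VMNetworkAdapter -ManagementOS -Name 'Cluster' -SwitchName 'vSwitch'
    Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'vSwitch'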

VLANs

VLANs seem to have some special power to trip people up. A few things:

  • The only purpose of a VLAN is to separate layer 2 (Ethernet) traffic.
  • VLANs are not necessary to separate layer 3 (IP) networks. Many network administrators use VLANs to create walls around specific layer 3 networks, though. If that describes your network, you will need to design your Hyper-V hosts to match. If your physical network doesn’t use VLANs, then don’t worry about them on your Hyper-V hosts.
  • Do not create one Hyper-V virtual switch per VLAN the way that you configure ESXi. Every Hyper-V virtual switch automatically supports untagged frames and VLAN IDs 1-4094.
  • Hyper-V does not have a “default” VLAN designation.
  • Configure VLANs directly on virtual adapters, not on the virtual switch.
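For instance (the VM name, adapter name, and VLAN IDs below are placeholders), tagging happens per virtual adapter with Set-VMNetworkAdapterVlan:

    # Put a guest's virtual adapter into access mode on VLAN 20.
    Set-VMNetworkAdapterVlan -VMName 'svtest' -Access -VlanId 20
    # Tag a management-OS virtual adapter the same way.
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster' -Access -VlanId 15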

Other Quick Pointers

I’m going to provide you with some links so you can do some more reading and get some assistance with configuration. However, some quick things to point out:

  • The Hyper-V virtual switch does not have an IP address of its own.
  • You do not manage the Hyper-V virtual switch via an IP or management VLAN. You manage the Hyper-V virtual switch with tools in the management operating system or from a remote system (Hyper-V Manager, PowerShell, and WMI/CIM).
  • Network connections for storage (iSCSI/SMB): Preferably, network connections for storage will use dedicated, unteamed physical adapters. If you can’t do that, then you can create dedicated virtual NICs in the management operating system.
  • Multiple virtual switches: Almost no one will ever need more than one virtual switch on a Hyper-V host. If you have VMware experience, resist the habit of creating one virtual switch per VLAN.
  • The virtual machines’ virtual network adapters connect directly to the virtual switch. You do not need anything in the management operating system to assist them. You don’t need a virtual adapter for the management operating system that has anything to do with the virtual machines.
  • Turn off VMQ for every gigabit physical adapter that will host a virtual switch. If you team them, the logical team NIC will also have a VMQ setting that you need to disable.
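A short sketch of that last pointer (adapter and team names are hypothetical):

    # Disable VMQ on the gigabit physical adapters that host the virtual switch...
    Set-NetAdapterVmq -Name 'NIC1' -Enabled $false
    Set-NetAdapterVmq -Name 'NIC2' -Enabled $false
    # ...and on the logical team adapter, if the switch sits on a team.
    Set-NetAdapterVmq -Name 'ConvergedTeam' -Enabled $false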

For More Information

I only intend for this article to be a quick introduction to show you what you’re trying to accomplish. We have several articles to help you dive into the concepts and the necessary steps for configuration.

How to Monitor Hyper-V Performance using PNP4Nagios

At a high level, you need three things to run a trouble-free datacenter (even if your datacenter consists of two mini-tower systems stuffed in a closet): intelligent architecture, monitoring, and trend analysis. Intelligent architecture consists of making good purchase decisions and designing virtual machines that can appropriately handle their load. Monitoring allows you to prevent or respond quickly to emergent situations. Trend analysis helps you to determine how well your reality matches your projections and greatly assists in future architectural decisions. In this article, we’re going to focus on trend analysis. We will set up a data collection and graphing system called “PNP4Nagios” that will allow you to track anything that you can measure. It will hold that data for four years. You can display it in graphs on demand.

What You Get

I know that intro was a little heavy. So, to put it more simply, I’m giving you graphs. Want to know how much CPU that VM has been using? Trying to figure out how quickly your VMs are filling up your Cluster Shared Volumes? Curious about a VM’s memory usage? We have all of that.

Where I find it most useful: Getting rid of vendor excuses. We all have at least one of those vendors that claim that we’re not providing enough CPU or memory or disk or a combination. Now, you can visually determine the reasonableness of their demands.

First, the host and service screens in Nagios will get a new graph icon next to every host and service that track performance data. Also, hovering over one of those graph icons will show a preview of the most recent chart:

p4n_mainscreen

Second, clicking any of those icons will open a new tab with the performance data graph for the selected item.

p4n_chartpage

Just as the Nagios pages periodically refresh, the PNP4Nagios page will update itself.

Additionally, you can do the following:

  • Click-dragging a section on a graph will cause it to zoom. If you’ve ever used the zoom feature in Performance Monitor, this is similar.
  • In the Actions bar, you can:
    • Set a custom time/date range to graph
    • Generate a PDF of the visible charts
    • Generate XML summary data
  • Create a “basket” of the graphs that you view most. The basket persists between sessions, so you can build a dashboard of your favorite charts

What You Need

Fortunately, you don’t need much to get going with PNP4Nagios.

Fiscal Cost

Let’s answer the most important question: what does it cost? PNP4Nagios does not require you to purchase anything. Their site does include a Donate button. If your organization finds PNP4Nagios useful, it would be good to throw a few dollars their way.

You’ll need an infrastructure to install PNP4Nagios on, of course. We’ll wrap that up into the later segments.

Nagios

As its name implies, PNP4Nagios needs Nagios. PNP4Nagios installs alongside Nagios on the same system. We have a couple of walkthroughs for installing Nagios as a Hyper-V guest, divided by distribution.

The installation really doesn’t change much between distributions. The differences lie in how you install the prerequisites and in how you configure Apache. If you know those things about your distribution, then you should be able to use either of the two linked walkthroughs to great effect. If you’d rather see something on your exact distribution, the official Nagios project has stepped up its game on documentation. If we haven’t got instructions for your distribution, maybe they do. There are still things that I do differently, but nothing of critical importance. Also, being a Hyper-V blog, I have included special items just for monitoring Hyper-V, so definitely look at the post-installation steps of my articles.

Also, if you want to use SSL and Active Directory to secure your Nagios installation, we’ve got an article for that.

Disk Space

According to the PNP4Nagios documentation, each item that you monitor will require about 400 kilobytes once it has reached maximum data retention. That assumes that you will leave the default historical interval and retention lengths. More information can be found on the PNP4Nagios site. So, 20 systems with 12 monitors apiece will use about 96 megabytes.

PNP4Nagios itself appears to use around 7 megabytes once installed and extracted.

Downloading PNP4Nagios

PNP4Nagios is distributed on Sourceforge: https://sourceforge.net/projects/pnp4nagios/files/latest/download.

As always, I recommend that you download to a standard workstation and then transfer the files to the Nagios server. Since I operate using a Windows PC and run Nagios on a Linux system, WinSCP is my choice of transfer tool.

On my Linux systems, I create a “Download” directory in my home folder and place everything there. The install portion of my instructions will be written using the file’s location as a starting point. So, for me, I begin with
cd ~/Downloads.

Installing PNP4Nagios

PNP4Nagios installs quite easily.

PNP4Nagios Prerequisites

Most of the prerequisites for PNP4Nagios automatically exist in most Linux distributions. Most of the remainder will have been satisfied when you installed Nagios. The documentation lists them: http://docs.pnp4nagios.org/pnp-0.6/about#required_software.

  • Perl, at least version 5. To check your installed Perl version:
    perl -v
  • RRDTool: This one will not be installed automatically or during a regular Nagios build. Most distributions include it in their mainstream repositories. Install with your distribution’s package manager.
    • CentOS and most other RedHat-based distributions:
      sudo yum install perl-rrdtool
    • SUSE-based systems:
      sudo zypper install rrdtool
    • Ubuntu and most other Debian-based distributions:
      sudo apt install rrdtool librrds-perl
  • PHP, at least version 5. This would have been installed with Nagios. Check with:
    php -v
  • GD extension for PHP. You might have installed this with Nagios. Easiest way to check is to just install it; it will tell you if you’ve already got it.
    • CentOS and most other RedHat-based distributions:
      sudo yum install php-gd
    • SUSE-based systems:
      sudo zypper install php-gd
    • Ubuntu and most other Debian-based distributions:
      sudo apt install php-gd
  • mod_rewrite extension for Apache. This should have been installed along with Nagios. How you check depends on whether your distribution uses “apache2” or “httpd” as the name of the Apache executable:
    • CentOS and most other RedHat-based distributions:
      sudo httpd -M | grep rewrite
    • Ubuntu, openSUSE, and most Debian and SUSE distributions:
      sudo apache2ctl -M | grep rewrite
  • There will be a bit more on this in the troubleshooting section near the end of the article, but if you’re running a more current version of PHP (like 7), then you may not have the XML extension built-in. I only ran into this problem on my Ubuntu installation. I solved it with this:
    sudo apt install php-xml
  • openSUSE was missing a couple of PHP modules on my system:
    sudo zypper install php-sockets php-zlib

If you are missing anything that I did not include instructions for, you can visit one of my articles on installing Nagios. If I haven’t got one for your distribution, then you’ll need to search for instructions elsewhere.

Unpacking and Installing PNP4Nagios

As I mentioned in the download section, I place my downloaded files in ~/Downloads. I start from there (with
cd ~/Downloads). Start these directions in the folder where you placed your downloaded PNP4Nagios tarball.

  1. Unpack the tarball. I wrote these directions with version 0.6.26. Modify your command as necessary (don’t forget about tab completion!):
    tar xzf pnp4nagios-0.6.26.tar.gz
  2. Move to the unpacked folder:
    cd ./pnp4nagios-0.6.26/
  3. Next, you will need to configure the installer. Most of us can just use it as-is. Some of us will need to override some things, such as the Nagios user groups. To determine if that applies to you, open /usr/local/nagios/etc/nagios.cfg. Look for the following section:

    nagios_user=nagios
    nagios_group=nagios
    If both nagios_user and nagios_group are “nagios”, then you don’t need to do anything special.
    Regular configuration:
    ./configure
    Configuration with overrides:
    ./configure --with-nagios-user=naguser --with-nagios-group=nagcmd
    Other overrides are available. You can view them all with
    ./configure --help. One useful override would be to change the location of the emitted perfdata files to an auxiliary volume to control space usage. On my Ubuntu system, I needed to override the location of the Apache conf files:
    ./configure --with-httpd-conf=/etc/apache2/sites-available

  4. When configure completes, check its output. Verify that everything looks OK. Especially pay attention to “Apache Config File” — note the value because you will access it later. If anything looks off, install any missing prerequisites and/or use the appropriate configure options. You can continue running ./configure until everything suits your needs.
  5. Compile the program:
    make all. If you have an “oh no!” moment in which you realize that you missed something, you can still re-run ./configure and then compile again.
  6. Because we’re doing a new installation, we will have it install everything:
    sudo make fullinstall. Be aware that we are now using sudo. That’s because it will need to copy files into locations that your regular account won’t have access to. For an upgrade, you’d likely only want
    sudo make install. Please check the documentation for additional notes about upgrading. If you didn’t pay attention to the output file locations during configure, they’ll be displayed to you again.
  7. We’re going to be adding a bit of flair to our Nagios links. Enable the pop-up extension with:
    sudo cp ./contrib/ssi/status-header.ssi /usr/local/nagios/share/ssi/

Installation is complete. We haven’t wired it into Nagios yet, so don’t expect any fireworks.

Configure Apache Security for PNP4Nagios

If you just use the default Apache security for Nagios, then you can skip this whole section. As outlined in my previous article, I use Active Directory authentication. Really, all that you need to do is duplicate your existing security configuration to the new site. Remember how I told you to pay attention to the output of configure, specifically “Apache Config File”? That’s the file to look in.

My “fixed” file looks like this:
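Approximately, anyway; treat the block below as an illustration rather than a verbatim copy, and substitute your own LDAP URL, bind account, and paths:

    <Directory "/usr/local/pnp4nagios/share">
        AllowOverride None
        Order allow,deny
        Allow from all
        AuthType Basic
        AuthName "Nagios Access"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://dc1.domain.int:389/DC=domain,DC=int?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN "CN=svc-nagios,OU=Service Accounts,DC=domain,DC=int"
        AuthLDAPBindPassword "BindAccountPassword"
        Require valid-user
    </Directory>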

Only a single line needed to be changed to match my Nagios virtual directories.

Initial Verification of PNP4Nagios Installation

Before we go any further, let’s ensure that our work to this point has done what we expected.

  1. If you are using a distribution whose Apache enables and disables sites by symlinking into sites-available and you instructed PNP4Nagios to place its files there (ex: Ubuntu), enable the site:
    sudo a2ensite pnp4nagios.conf
  2. Restart Apache.
    1. CentOS and most other RedHat-based distributions:
      sudo service httpd restart
    2. Almost everyone else:
      sudo service apache2 restart
  3. If necessary, address any issues with Apache starting. For instance, Apache on my openSUSE box really did not like the “Order” and “Allow” directives.
  4. Once Apache starts correctly, access http://yournagiosserveraddress/pnp4nagios. For instance, my internal URL is http://nagios.siron.int/pnp4nagios. Remember that you copied over your Nagios security configuration, so you will log in using the same credentials that you use on a normal Nagios site.
  5. Fix any problems indicated by the web page. Continue reloading the Apache server and the page as necessary until you get the green light:
    p4n_greenlight
  6. Remove the file that validates the installation:
    sudo rm /usr/local/pnp4nagios/share/install.php

Installation was painless on my CentOS and Ubuntu systems. openSUSE gave me more drama. In particular, it complained about “PHP zlib extension not available” and “PHP socket extension not available”. Very easy to fix:
sudo zypper install php-sockets php-zlib. Don’t forget to restart Apache after making these changes.

Initial Configuration of Nagios for PNP4Nagios

At this point, you have PNP4Nagios mostly prepared to do its job. However, if you try to access the URL, you’ll get a message that says that it doesn’t have any data: “perfdata directory “/usr/local/pnp4nagios/var/perfdata/” is empty. Please check your Nagios config.” Nagios needs to start feeding it data.

We start by making several global changes. If you are comparing my walkthrough to the official PNP4Nagios documentation, be aware that I am guiding you to a Bulk + NPCD configuration. I’ll talk about why after the how-to.

Global Nagios Configuration File Changes

In the text editor of your choice, open /usr/local/nagios/etc/nagios.cfg. Find each of the entries that I show in the following block and change them accordingly. Some don’t need anything other than to be uncommented:

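For reference, the bulk-with-NPCD settings from the PNP4Nagios documentation look roughly like this (paths assume the default /usr/local/pnp4nagios install location; adjust them if you overrode it during configure):

    process_performance_data=1

    host_perfdata_file=/usr/local/pnp4nagios/var/host-perfdata
    host_perfdata_file_template=DATATYPE::HOSTPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tHOSTPERFDATA::$HOSTPERFDATA$\tHOSTCHECKCOMMAND::$HOSTCHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$
    host_perfdata_file_mode=a
    host_perfdata_file_processing_interval=15
    host_perfdata_file_processing_command=process-host-perfdata-file

    service_perfdata_file=/usr/local/pnp4nagios/var/service-perfdata
    service_perfdata_file_template=DATATYPE::SERVICEPERFDATA\tTIMET::$TIMET$\tHOSTNAME::$HOSTNAME$\tSERVICEDESC::$SERVICEDESC$\tSERVICEPERFDATA::$SERVICEPERFDATA$\tSERVICECHECKCOMMAND::$SERVICECHECKCOMMAND$\tHOSTSTATE::$HOSTSTATE$\tHOSTSTATETYPE::$HOSTSTATETYPE$\tSERVICESTATE::$SERVICESTATE$\tSERVICESTATETYPE::$SERVICESTATETYPE$
    service_perfdata_file_mode=a
    service_perfdata_file_processing_interval=15
    service_perfdata_file_processing_command=process-service-perfdata-file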
Next, open /usr/local/nagios/etc/objects/templates.cfg. At the end, you’ll find some existing commands that mention “perfdata”. After those, add the commands from the following block. If you don’t use the initial Nagios sample files, then just place these commands in any active cfg file that makes sense to you.
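Per the same PNP4Nagios bulk/NPCD documentation, those commands simply move the accumulated perfdata files into the NPCD spool directory, roughly like so:

    define command {
        command_name    process-host-perfdata-file
        command_line    /bin/mv /usr/local/pnp4nagios/var/host-perfdata /usr/local/pnp4nagios/var/spool/host-perfdata.$TIMET$
    }

    define command {
        command_name    process-service-perfdata-file
        command_line    /bin/mv /usr/local/pnp4nagios/var/service-perfdata /usr/local/pnp4nagios/var/spool/service-perfdata.$TIMET$
    }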

Configuring NPCD

The performance collection method that we’re employing involves the Nagios Perfdata C Daemon (NPCD). The default configuration will work perfectly for this walkthrough. If you need something more from it, you can edit /usr/local/pnp4nagios/etc/npcd.cfg. We just want it to run as a daemon:

Enable it to run automatically at startup.

  • Most Red Hat and SUSE based distributions:
    sudo chkconfig --add npcd
  • Ubuntu and most other Debian-based distributions:
    sudo update-rc.d npcd defaults

Configuring Hosts in Nagios for PNP4Nagios Graphing

If you made it here, you’ve successfully completed all the hard work! Now you just need to tell Nagios to start collecting performance data so that PNP4Nagios can graph it.

Note: I deviate substantially from the PNP4Nagios official documentation. If you follow those directions, you will quickly and easily set up every single host and every single service to gather data. I didn’t want that because I don’t find such a heavy hand to be particularly useful. You’ll need to do more work to exert finer control. In my opinion, that extra bit of work is worth it. I’ll explain why after the how-to.

If you followed the path of least resistance, every single host in your Nagios environment inherits from a single root source. Open /usr/local/nagios/etc/objects/templates.cfg. Find the define host object with a name of generic-host. Most likely, this is your master host object. Look at its configuration:
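The stock sample ships looking approximately like this; the line that matters here is process_perf_data:

    define host {
        name                            generic-host
        notifications_enabled           1
        event_handler_enabled           1
        flap_detection_enabled          1
        process_perf_data               1
        retain_status_information       1
        retain_nonstatus_information    1
        notification_period             24x7
        register                        0
    }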

Now that you’ve enabled performance data processing in nagios.cfg, this means that Nagios and PNP4Nagios will now start graphing for every single host in your Nagios configuration. Sound good? Well, wait a second. What it really means is that it will graph the output of the check_command for every single host in your Nagios configuration. What is check_command in this case? Probably check_ping or check_icmp. The performance data that those output are the round-trip average and packets lost during pings from the Nagios server to the host in question. Is that really useful information? To track for four years?

I don’t really need that information. Certainly not for every host. So, I modified mine to look this:

What we have:

  • Our existing hosts are untouched. They’ll continue not recording performance data just as they always have.
  • A new, small host definition called “perf-host”. It also does not set up the recording of host performance data. However, its “action_url” setting will cause it to display a link to any graphs that belong to this host. You can use this with hosts that have graphed services but you don’t want the ping statistics tracked. To use it, you would set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example:
    use perf-host,generic-host.
  • A new, small host definition called “perf-host-pingdata”. It works exactly like “perf-host” except that it will capture the ping data as well. The extra bit on the end of the “action_url” will cause it to draw a little preview when you mouseover the link. To use it, you will set up/modify hosts and host templates to inherit from this template in addition to whatever host templates they already inherit from. For example:
    use perf-host-pingdata,generic-host.

Note: When setting the inheritance:

  • perf-host or perf-host-pingdata must come before any other host templates in a use line.
  • In some instances, including a space after the comma in a use line causes Nagios to panic if the name of the host does not also have a space (ex: you are using tabs instead of spaces on the
    name generic-host line). Make sure that all of your use directives have no spaces after any commas and you will never have a problem. Ex:
    use perf-host,generic-host.

Remember to check the configuration and restart Nagios after any changes to the .cfg files:
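For reference, that is the usual pair of commands (adjust the service manager syntax to your distribution):

    sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
    sudo service nagios restart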

Couldn’t You Just Set a Single Root Host for Inheritance?

An alternative to the above would be:
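Sketched roughly (same caveat about the action_url as above):

    define host {
        name            perf-host
        use             generic-host
        action_url      /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=_HOST_
        register        0
    }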

In this configuration, perf-host inherits directly from generic-host. You could then have all of your other systems inherit from perf-host instead of generic-host. The problem is that even in a fairly new Nagios installation, a fair number of hosts already inherit from generic-host. You’d need to determine which of those you wanted to edit and carefully consider how inheritance works. If you’re going to all of that trouble, it seems to me that maybe you should just directly edit the generic-host template and be done with it.

Truthfully, I’m only telling you what I do. Do whatever makes sense to you.

Configuring Services in Nagios for PNP4Nagios Graphing

You’ll get much more use out of service graphing than host graphing. Just as with hosts, the default configuration enables performance graphing for all services. Not all services emit performance data, and you may not want data from all services that do produce data. So, let’s fine-tune that configuration as well.

Still in /usr/local/nagios/etc/objects/templates.cfg, find the define service object with a name of generic-service. Disable performance data collection on it and add a stub service that enables performance graphing:
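In sketch form, that means flipping process_perf_data to 0 on generic-service and adding a stub like this (again, the action_url is the standard PNP4Nagios string for services):

    define service {
        name                perf-service
        process_perf_data   1
        action_url          /pnp4nagios/index.php/graph?host=$HOSTNAME$&srv=$SERVICEDESC$' class='tips' rel='/pnp4nagios/index.php/popup?host=$HOSTNAME$&srv=$SERVICEDESC$
        register            0
    }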

When you want to capture performance data from a service, prepend the new stub service to its use line. Ex:
use perf-service,generic-service. The warnings from the host section about the order of items and the lack of a space after the comma in the use line transfer to the service definition.

Remember to check the configuration and restart Nagios after any changes to the .cfg files, just as before.

Example Configurations

In case the above doesn’t make sense, I’ll show you what I’m doing.

Most of the check_nt services emit performance data. I’m especially interested in CPU, disk, and memory. The uptime service also emits data, but for some reason, it doesn’t use the defined “counter” mode. Instead, it’s just a graph that steadily increases at each interval until you reboot, then it starts over again at zero. I don’t find that terribly useful, especially since Nagios has its own perfectly capable host uptime graphs. So, I first configure the “windows-server” host to show the performance action_url. Then I configure the desired default Windows services to capture performance data.

My /usr/local/nagios/etc/objects/windows.cfg:
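Abridged to the pattern that matters (the host name, address, and check_command values here are just the stock Windows sample entries):

    define host {
        use             perf-host,windows-server
        host_name       winserver
        alias           My Windows Server
        address         192.168.1.2
    }

    define service {
        use                     perf-service,generic-service
        host_name               winserver
        service_description     CPU Load
        check_command           check_nt!CPULOAD!-l 5,80,90
    }

    # The memory and disk services follow the same pattern: prepend perf-service to their use lines.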

Now, my hosts that inherit from the default Windows template have the extra action icon, but my other hosts do not:

p4n_hostswithicons

The same story on the services page; services that track performance data have an icon, but the others do not:

p4n_serviceswithicons

Troubleshooting your PNP4Nagios Deployment

Not getting any data? First of all, be patient, especially when you’re just getting started. I have shown you how to set up the bulk mode with NPCD which means that data captures and graphing are delayed. I’ll explain why later, but for now, just be aware that it will take some time before you get anything at all.

If it’s been some time (say, 15 minutes) and you’re still not getting any data, go to verify.pnp4nagios.org and download the verify_pnp_config file. Transfer it to your Nagios host. I just plop it into my Downloads folder as usual. Navigate to the folder where you placed yours, then run:
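The invocation for the bulk-with-NPCD layout used in this walkthrough looks roughly like this:

    perl verify_pnp_config -m bulk+npcd -c /usr/local/nagios/etc/nagios.cfg -p /usr/local/pnp4nagios/etc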

That should give you the clues that you need to fix most any problems.

I did have one leftover problem, but only on my Ubuntu system where I had updated to PHP 7. The verify script passed everything, but trying to load any PNP4Nagios page gave me this error: “Call to undefined function simplexml_load_file()”. I only needed to install the PHP XML package to fix that:
sudo apt install php-xml. I didn’t look up the equivalent on the other distributions.

Plugin Output for Performance Graphing

To determine if a plugin can be graphed, you could just look at its documentation. Otherwise, you’ll need to manually execute it from /usr/local/nagios/libexec. For instance, we’ll just use the first one that shows up on an Ubuntu system, check_apt:

p4n_testcheckoutput
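On a typical system, the output looks something like this (your numbers will differ):

    ./check_apt
    APT OK: 0 packages available for upgrade (0 critical updates). |available_upgrades=0;;;0 critical_updates=0;;;0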

See the pipe character (|) there after the available updates report? Then the jumble of characters after that? That’s all in the standard format for Nagios performance charting. That format is:

  1. A pipe character after the standard Nagios service monitoring result.
  2. A human-readable label. If the label includes any special characters, the entire label should be enclosed in single quotes.
  3. An equal sign (=)
  4. The reported value.
  5. Optionally, a unit of measure.
  6. A semi-colon, optionally followed by a value for the warning level. If the warning level is visible on the produced chart, it will be indicated by a horizontal yellow line.
  7. A semi-colon, optionally followed by a value for the critical level. If the critical level is visible on the produced chart, it will be indicated by a horizontal red line.
  8. A semicolon, optionally followed by the minimum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the minimum value. If this value would make the current value invisible, PNP4Nagios will set its own minimum.
  9. A semicolon, optionally followed by the maximum value for the chart’s y-axis. Must be the same unit of measure as the value in #4. If not specified, PNP4Nagios will automatically set the maximum value. If this value would make the current value invisible, PNP4Nagios will set its own maximum.

This format is defined by Nagios and PNP4Nagios conforms to it. You can read more about the format at: verify.pnp4nagios.org/

My plugins did not originally emit any performance data. I have been working on that and should hopefully have all of that work completed before you read this article.

My PNP4Nagios Configuration Philosophy

I had several decision points when setting up my system. You may choose to diverge as it meets your needs. I’ll use this section to explain why I made the choices that I did.

Why “Bulk with NPCD” Mode?

Initially, I tried to set up PNP4Nagios in “synchronous” mode. That would cause Nagios to instantly call on PNP4Nagios to generate performance data immediately after every check’s results were returned. I chose that initially because it seemed like the path of least resistance.

It didn’t work for me. I’m betting that I did something wrong. But, I didn’t get my problem sorted out. I found a lot more information on the NPCD mode. So, I switched. Then I researched the differences. I feel like I made the correct choice.

You can read up on the available modes yourself: http://docs.pnp4nagios.org/pnp-0.6/modes.

In synchronous mode, Nagios can’t do anything while PNP4Nagios processes the return information. That’s because it all occurs in the same thread; we call that behavior “blocking”. According to the PNP4Nagios documentation, that method “will work very good up to about 1,000 services in a 5-minute interval”. I assume that’s CPU-driven, but I don’t know. I also don’t know how to quantify or qualify “will work very good”. I also don’t know what sort of environments any of my readers are using.

Bulk mode moves the processing of data from per-return-of-results to gathering results for a while and then processing them all at once. The documentation says that testing showed that 2,000 services were processed in .06 seconds. That’s easier to translate to real-world systems, although I still don’t know the overall conditions that generated that benchmark.

When we add NPCD onto bulk mode, then we don’t block Nagios at all. Nagios still does the bulk gathering, but NPCD processes the data, not Nagios. I chose this method as it means that as long as your Nagios system is multi-core and not already overloaded, you should not encounter any meaningful interruption to your Nagios service by adding PNP4Nagios. It should also work well with most installation sizes. For really big Nagios/PNP4Nagios installations (also not qualified or quantified), you can follow their instructions on configuring “Gearman Mode”.

One drawback to this method: Your “4 Hour” charts will frequently show an empty space at the right of their charts. That’s because they will be drawn in-between collection/processing periods. All of the data will be filled in after a few minutes. You just may not have instant gratification.

Why Not Just Allow Every Host and Service to be Monitored?

The default configuration of PNP4Nagios results in every single host and every single service being enabled for monitoring. From an “ease-of-configuration” standpoint, that’s tempting. Once you’ve set the globals, you literally don’t have to do anything else.

However, we are also integrating directly with Nagios’ generated HTML pages. Whereas PNP4Nagios can determine that a service doesn’t have performance data because Nagios won’t have generated anything, the front-end just has an instruction to add a linked icon to every single service. So, if you just globally enable it, then you’ll get a lot of links that don’t work.

If you’re the only person using your environment, maybe that’s OK. But, if you share the environment, then you’ll start getting calls wanting you to “fix” all those broken links. It won’t take long before you’re spending more time explaining (and re-explaining) that not all of the links have anything to show.

Why Not Just Change the Inheritance Tree?

If you want, you could have your performance-enabled hosts and services inherit from the generic-host/generic-service templates, then have later templates, hosts, and services inherit from those. If that works for you, then take that approach.

I chose to employ multiple inheritance as a way of overriding the default templates because it seemed like less effort to me. When I went to modify the services, I simply copied “perf-service,” to the clipboard and then selectively pasted it into the use line of every service that I wanted. It worked easier for me than a selective find-replace operation or manual replacement. It also seems to me that it would be easier to revert that decision if I make a mistake somewhere.

I can envision very solid arguments for handling this differently. I won’t argue. I just think that this approach was best for my situation.

Migrating local VM owner certificates for VMs with vTPM

Whenever I want to replace or reinstall a system that is used to run virtual machines with a virtual trusted platform module (vTPM), I face a challenge: for hosts that are not part of a guarded fabric, the new system needs to be authorized to run the VM.
Some time ago, I wrote a blog post focused on running VMs with a vTPM on additional hosts, but the approach highlighted there does not solve everything when the original host is decommissioned. The VMs can be started on the new host, but without the original owner certificates, you cannot change the list of allowed guardians anymore.

This blog post shows a way to export the information needed from the source host and import it on a destination host. Please note that this technique only works for local mode and not for a host that is part of a guarded fabric. You can check whether your host runs in local mode by running Get-HgsClientConfiguration. The property Mode should list Local as a value.

Exporting the default owner from the source host

The following script exports the necessary information of the default owner (“UntrustedGuardian“) on a host that is configured using local mode. When running the script on the source host, two certificates are exported: a signing certificate and an encryption certificate.
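A minimal sketch of such an export (it assumes the default guardian name, that the certificates live in the local machine’s “Shielded VM Local Certificates” store, and that the guardian object exposes SigningCertificate and EncryptionCertificate properties; the output paths are placeholders):

    # Export the signing and encryption certificates of the local 'UntrustedGuardian'.
    $password  = Read-Host -Prompt 'Password to protect the exported PFX files' -AsSecureString
    $guardian  = Get-HgsGuardian -Name 'UntrustedGuardian'
    $certStore = 'Cert:\LocalMachine\Shielded VM Local Certificates'

    # Look the certificates up in the store by thumbprint so the private keys are included in the export.
    $signing    = Get-Item -Path (Join-Path $certStore $guardian.SigningCertificate.Thumbprint)
    $encryption = Get-Item -Path (Join-Path $certStore $guardian.EncryptionCertificate.Thumbprint)

    Export-PfxCertificate -Cert $signing    -FilePath 'C:\Temp\UntrustedGuardian-Signing.pfx'    -Password $password
    Export-PfxCertificate -Cert $encryption -FilePath 'C:\Temp\UntrustedGuardian-Encryption.pfx' -Password $password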

Importing the UntrustedGuardian on the new host

On the destination host, the following snippet creates a new guardian using the certificates that have been exported in the previous step.
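A matching sketch for the import (same placeholder file names; -AllowUntrustedRoot is needed because the certificates are self-signed):

    $password = Read-Host -Prompt 'Password of the exported PFX files' -AsSecureString
    New-HgsGuardian -Name 'UntrustedGuardian' `
        -SigningCertificate 'C:\Temp\UntrustedGuardian-Signing.pfx' `
        -SigningCertificatePassword $password `
        -EncryptionCertificate 'C:\Temp\UntrustedGuardian-Encryption.pfx' `
        -EncryptionCertificatePassword $password `
        -AllowUntrustedRoot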

Please note that importing the “UntrustedGuardian” on the new host has to be done before creating new VMs with a vTPM on this host — otherwise a new guardian with the same name will already be present and the creation with the PowerShell snippet above will fail.

With these two steps, you should be able to migrate all the necessary bits to keep your VMs with vTPM running in your dev/test environment. This approach can also be used to back up your owner certificates, depending on how these certificates have been created.

Free Hyper-V Script: Virtual Machine Storage Consistency Diagnosis

<#

.SYNOPSIS

Verifies that a virtual machine’s files are all stored together.

.DESCRIPTION

Verifies that a virtual machine’s files are all stored together. Reports any inconsistencies in locations.

.PARAMETER VM

The virtual machine to check.

Accepts objects of type:

* String: A name of a virtual machine.

* VirtualMachine: An object from Get-VM

* System.GUID: A virtual machine ID. MUST be of type System.GUID to match.

* ManagementObject: A WMI object of type Msvm_ComputerSystem

* ManagementObject: A WMI object of type MSCluster_Resource

.PARAMETER ComputerName

The name of the computer that hosts the virtual machine to check. If not specified, uses the local computer.

Ignored if VM is of type VirtualMachine or ManagementObject.

.PARAMETER DisksOnly

Set to true if you only care if data resides on different physical disks/LUNs.

Otherwise, a VM will be marked inconsistent if components exist in different folders.

.PARAMETER IgnoreVHDFolder

Set to true if you want to ignore the 'Virtual Hard Disks' subfolder for VHD/X files.

Example: If set, then VHDXs in C:\VMs\Virtual Hard Disks will be treated as though they are in C:\VMs

Ignored when DisksOnly is set

.NOTES

Author: Eric Siron

Version 1.0

Authored Date: October 2, 2017

.EXAMPLE

Get-VMStorageConsistency -VM vm01

Reports the consistency of storage for the virtual machine named “vm01” on the local host.

.EXAMPLE

Get-VMStorageConsistency -VM vm01 -ComputerName hv01

Reports the consistency of storage for the virtual machine named “vm01” on the host named “hv01”.

.EXAMPLE

Get-VM | Get-VMStorageConsistency

Reports the consistency of storage for all local virtual machines.

.EXAMPLE

Get-VMStorageConsistency -VM vm01 -DisksOnly

Reports the consistency of storage for the virtual machine named “vm01” on the local host. Only checks that components reside on the same physical storage.

.EXAMPLE

Get-VMStorageConsistency -VM vm01 -IgnoreVHDFolder

Reports the consistency of storage for the virtual machine named “vm01” on the local host. If VHDXs reside in a Virtual Hard Disks subfolder, that will be ignored.

So, if the VM’s components are in \\smbstore\VMs but the VHDXs are in \\smbstore\VMs\Virtual Hard Disks, the locations will be treated as consistent.

However, if the VM’s components are in \\smbstore\VMs\Virtual Machines while the VHDXs are in \\smbstore\VMs\Virtual Hard Disks, that will be inconsistent.

#>

#requires -Version 4

# function Get-VMStorageConsistency # Uncomment this line to use as a dot-sourced function or in a profile. Also next line and last line

#{ # Uncomment this line to use as a dot-sourced function or in a profile. Also preceding line and last line

[CmdletBinding()]

param(

[Parameter(Mandatory=$true, ValueFromPipeline=$true, Position=1)]

[Alias('VMName', 'Name')]

[Object[]]

$VM,

[Parameter(Position = 2)][String]$ComputerName = $env:COMPUTERNAME,

[Parameter()][Switch]$DisksOnly,

[Parameter()][Switch]$IgnoreVHDFolder

)

BEGIN {

$ErrorActionPreference = [System.Management.Automation.ActionPreference]::Stop

Set-StrictMode -Version Latest

function New-LocationObject

{

<#

.SYNOPSIS

Defines/creates an object matching a VM’s component to its location.

#>

$LocationObject = New-Object -TypeName psobject

Add-Member -InputObject $LocationObject -MemberType NoteProperty -Name 'Component' -Value ([System.String]::Empty)

Add-Member -InputObject $LocationObject -MemberType NoteProperty -Name 'Location' -Value ([System.String]::Empty)

$LocationObject

}

function New-StorageConsistencyReport

{

<#

.SYNOPSIS

Defines/creates a VM’s storage consistency report object.

#>

$Report = New-Object -TypeName psobject

Add-Member -InputObject $Report -MemberType NoteProperty -Name 'Name' -Value ([System.String]::Empty)

Add-Member -InputObject $Report -MemberType NoteProperty -Name 'ComputerName' -Value ([System.String]::Empty)

Add-Member -InputObject $Report -MemberType NoteProperty -Name 'VMId' -Value ([System.String]::Empty)

Add-Member -InputObject $Report -MemberType NoteProperty -Name 'Consistent' -Value $false

Add-Member -InputObject $Report -MemberType NoteProperty -Name 'Locations' -Value @()

$Report

}

function Parse-Location

{

<#

.SYNOPSIS

Extracts the location information from a component’s path.

.PARAMETER Path

The path to parse

.PARAMETER DisksOnly

If specified, returns only the drive portion of the path. If a CSV is detected, returns the mount point name.

.PARAMETER TrimFile

If specified, assumes that Path includes a file name. Use with VHDXs and ISOs.

.PARAMETER IgnoreVHDFolder

If specified, will remove any trailing ‘Virtual Hard Disks’ subfolder

#>

param(

[Parameter()][String]$Path,

[Parameter()][bool]$DisksOnly,

[Parameter()][bool]$TrimFile,

[Parameter()][bool]$IgnoreVHDFolder

)

if ($DisksOnly)

{

if ($Path -match '([A-Za-z]:\\ClusterStorage\\.+?)(\\|\z)')

{

$Path = $Matches[1]

}

else

{

$Path = [System.IO.Path]::GetPathRoot($Path)

}

}

else

{

if ($TrimFile)

{

$Path = [System.IO.Path]::GetDirectoryName($Path)

}

if ($IgnoreVHDFolder)

{

$Path = $Path -replace '\\?Virtual Hard Disks\\?$', ''

}

}

$Path -replace '\\$', ''

}

function Process-Location

{

param(

[Parameter()][ref]$Report,

[Parameter()][String]$Component,

[Parameter()][String]$Location,

[Parameter()][bool]$DisksOnly,

[Parameter()][String]$RootLocation,

[Parameter()][bool]$TrimFile = $false,

[Parameter()][bool]$IgnoreVHDFolder = $false

)

$ThisLocation = New-LocationObject

$ThisLocation.Component = $Component

$ThisLocation.Location = $Location

$Report.Value.Locations += $ThisLocation

$CurrentObservedLocation = Parse-Location -Path $Location -DisksOnly $DisksOnly -TrimFile $TrimFile -IgnoreVHDFolder $IgnoreVHDFolder

if ($Report.Value.Consistent)

{

if ($CurrentObservedLocation -ne $RootLocation)

{

$Report.Value.Consistent = $false

Write-Verbose -Message ("VM {0} on {1} failed consistency on component {2}.`r`n`tRoot component location: {3}`r`n`t{2} location: {4}" -f $Report.Value.Name, $Report.Value.ComputerName, $Component, $RootLocation, $CurrentObservedLocation)

}

}

}

}

PROCESS {

foreach ($VMItem in $VM)

{

$VMObject = $null

try

{

switch ($VMItem.GetType().FullName)

{

'Microsoft.HyperV.PowerShell.VirtualMachine'

{

$VMObject = Get-WmiObject -ComputerName $VMItem.ComputerName -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' -Filter ('Name="{0}"' -f $VMItem.Id) -ErrorAction Stop

}

'System.GUID'

{

$VMObject = Get-WmiObject -ComputerName $ComputerName -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' -Filter ('Name="{0}"' -f $VMItem) -ErrorAction Stop

}

'System.Management.ManagementObject'

{

switch ($VMItem.ClassPath.ClassName)

{

'Msvm_ComputerSystem'

{

$VMObject = $VMItem

}

'MSCluster_Resource'

{

$VMObject = Get-WmiObject -ComputerName $VMItem.ClassPath.Server -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' -Filter ('Name="{0}"' -f $VMItem.PrivateProperties.VmID) -ErrorAction Stop

}

default

{

$ArgEx = New-Object System.ArgumentException(('Cannot accept objects of type {0}' -f $VMItem.ClassPath.ClassName), 'VM')

throw($ArgEx)

}

}

}

'System.String'

{

if ($VMItem -ne $ComputerName -and $VMItem -ne $env:COMPUTERNAME)

{

$VMObject = Get-WmiObject -ComputerName $ComputerName -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' -Filter ('ElementName="{0}"' -f $VMItem) -ErrorAction Stop | select -First 1

}

}

default

{

$ArgEx = New-Object System.ArgumentException(('Unable to process objects of type {0}' -f $VMItem.GetType().FullName), 'VM')

throw($ArgEx)

}

}

if (-not $VMObject)

{

throw('Unable to process input object {0}' -f $VMItem.ToString())

}

}

catch

{

Write-Error -Exception $_.Exception -ErrorAction Continue

continue

}

$VMObjectComputerName = $VMObject.__SERVER

$RelatedVMSettings = $VMObject.GetRelated('Msvm_VirtualSystemSettingData') | select -Unique

$VMSettings = $RelatedVMSettings | where -Property VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized'

$VMHardDisks = $null

$VMHardDisks = $RelatedVMSettings.GetRelated() | where -Property ResourceSubType -eq 'Microsoft:Hyper-V:Virtual Hard Disk' -ErrorAction SilentlyContinue

$VMRemovableDisks = $null

$VMRemovableDisks = $RelatedVMSettings.GetRelated() | where -Property ResourceSubType -match 'Microsoft:Hyper-V:Virtual (CD/DVD|Floppy) Disk' -ErrorAction SilentlyContinue

$RootLocation = Parse-Location -Path $VMSettings.ConfigurationDataRoot -DisksOnly $DisksOnly

$Report = New-StorageConsistencyReport

$Report.Name = $VMObject.ElementName

$Report.VMId = $VMObject.Name

$Report.ComputerName = $VMObjectComputerName

$Report.Consistent = $true

Process-Location -Report ([ref]$Report) -Component 'Configuration' -Location $VMSettings.ConfigurationDataRoot -DisksOnly $DisksOnly -RootLocation $RootLocation

Process-Location -Report ([ref]$Report) -Component 'Checkpoints' -Location $VMSettings.SnapshotDataRoot -DisksOnly $DisksOnly -RootLocation $RootLocation

Process-Location -Report ([ref]$Report) -Component 'SecondLevelPaging' -Location $VMSettings.SwapFileDataRoot -DisksOnly $DisksOnly -RootLocation $RootLocation

foreach ($VMHardDisk in $VMHardDisks)

{

Process-Location -Report ([ref]$Report) -Component 'Virtual Hard Disk' -Location $VMHardDisk.HostResource[0] -DisksOnly $DisksOnly -RootLocation $RootLocation -TrimFile $true -IgnoreVHDFolder $IgnoreVHDFolder.ToBool()

}

foreach ($VMRemovableDisk in $VMRemovableDisks)

{

Process-Location -Report ([ref]$Report) -Component 'CD/DVD Image' -Location $VMRemovableDisk.HostResource[0] -DisksOnly $DisksOnly -RootLocation $RootLocation -TrimFile $true -IgnoreVHDFolder $IgnoreVHDFolder.ToBool()

}

$Report

}

}

#} # Uncomment this line to use as a dot-sourced function or in a profile. Also “function” and opening brace lines near top of script

Device Naming for Network Adapters in Hyper-V 2016

Not all of the features introduced with Hyper-V 2016 made a splash. One of the less-publicized improvements allows you to determine a virtual network adapter’s name from within the guest operating system. I don’t even see it in any official documentation, so I don’t know what to officially call it. The related settings use the term “device naming”, so we’ll call it that. Let’s see how to put it to use.

Requirements for Device Naming for Network Adapters in Hyper-V 2016

For this feature to work, you need:

  • 2016-level hypervisor: Hyper-V Server, Windows Server, Windows 10
  • Generation 2 virtual machine
  • Virtual machine with a configuration version of at least 6.2
  • Windows Server 2016 or Windows 10 guest

What is Device Naming for Hyper-V Virtual Network Adapters?

You may already be familiar with a technology called “Consistent Device Naming”. If you were hoping to use that with your virtual machines, sorry! The device naming feature utilized by Hyper-V is not the same thing. I don’t know for sure, but I’m guessing that the Hyper-V Integration Services enable this feature.

Basically, if you were expecting to see something different in the Network and Sharing Center, it won’t happen:

harn_nscenter

Nor in Get-NetAdapter:

harn_getnetadapter

In contrast, a physical system employing Consistent Device Naming would have automatically named the network adapters in some fashion that reflected their physical installation. For example, “SLOT 4 Port 1” would be the name of the first port of a multi-port adapter installed in the fourth PCIe slot. It may not always be easy to determine how the manufacturers numbered their slots and ports, but it helps more than “Ethernet 5”.

Anyway, you don’t get that out of Hyper-V’s device naming feature. Instead, it shows up as an advanced feature. You can see that in several ways. First, I’ll show you how to set the value.

Setting Hyper-V’s Network Device Name in PowerShell

From the management operating system or a remote PowerShell session opened to the management operating system, use Set-VMNetworkAdapter:
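
A minimal example (the DeviceNaming parameter accepts On or Off):

Set-VMNetworkAdapter -VMName sv16g2 -DeviceNaming On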

This enables device naming for all of the virtual adapters connected to the virtual machine named sv16g2.

If you try to enable it for a generation 1 virtual machine, you get a clear error (although it sometimes inexplicably complains about the DVD drive first, it eventually gets where it’s going).

The cmdlet doesn’t know if the guest operating system supports this feature (or even if the virtual machine has an installed operating system).

If you don’t want the default “Network Adapter” name, then you can set the name at the same time that you enable the feature:
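
One possible sketch, renaming the adapter and enabling the feature in a single pipeline (the new name “Backup” is only an example):

Get-VMNetworkAdapter -VMName sv16g2 -Name 'Network Adapter' |
    Rename-VMNetworkAdapter -NewName 'Backup' -Passthru |
    Set-VMNetworkAdapter -DeviceNaming On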

These cmdlets all accept pipeline information as well as a number of other parameters. You can review the TechNet article that I linked in the beginning of this section. I also have some other usage examples on our omnibus networking article.

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter.

Note: You must reboot the guest operating system for it to reflect the change.

Setting Hyper-V’s Network Device Name in the GUI

You can use Hyper-V Manager or Failover Cluster Manager to enable this feature. Just look at the bottom of the Advanced Features sub-tab of the network adapter’s tab. Check the Enable device naming box. If that box does not appear, you are viewing a generation 1 virtual machine.

ndn_gui

Reminder: PowerShell is the only way to set the name of a Hyper-V virtual network adapter. See the preceding section for instructions.

Note: You must reboot the guest operating system for it to reflect the change.

Viewing Hyper-V’s Network Device Name in the Guest GUI

This will only work in Windows 10/Windows Server 2016 (GUI) guests. The screenshots in this section were taken from a system that still had the default name of Network Adapter.

  1. Start in the Network Connections window. Right-click on the adapter and choose Properties:
    ndn_netadvprops
  2. When the Ethernet # Properties dialog appears, click Configure:
    ndn_netpropsconfbutton
  3. On the Microsoft Hyper-V Network Adapter Properties dialog, switch to the Advanced tab. You’re looking for the Hyper-V Network Adapter Name property. The Value field shows the name that Hyper-V has recorded for the adapter:
    ndn_display

If the Value field is empty, then the feature is not enabled for that adapter or you have not rebooted since enabling it. If the Hyper-V Network Adapter Name property does not exist, then you are using a down-level guest operating system or a generation 1 VM.

Viewing Hyper-V’s Network Device Name in the Guest with PowerShell

As you saw in the preceding section, this field appears with the adapter’s advanced settings. Therefore, you can view it with the Get-NetAdapterAdvancedProperty cmdlet. To see all of the settings for all adapters, use that cmdlet by itself.

ndn_psall

Tab completion doesn’t work for the names, so drilling down just to that item can be a bit of a chore. The long way:
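
A sketch of that long way, filtering on the display name shown in the guest property page above:

Get-NetAdapterAdvancedProperty | Where-Object -Property DisplayName -EQ 'Hyper-V Network Adapter Name'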

Slightly shorter way:
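
Something like this, letting the cmdlet’s own DisplayName parameter do the filtering:

Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name'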

One of many not-future-proofed-but-works-today ways:
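
One such possibility keys on the underlying registry keyword; it works today, but nothing guarantees that keyword name never changes:

Get-NetAdapterAdvancedProperty -RegistryKeyword 'HyperVNetworkAdapterName'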

For automation purposes, you need to query the DisplayValue or the RegistryValue property. I prefer the DisplayValue. It is represented as a standard System.String. The RegistryValue is represented as a System.Array of System.String (or, String[]). It will never contain more than one entry, so dealing with the array is just an extra annoyance.

To pull that field, you could use select (an alias for Select-Object), but I wouldn’t:
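
That approach would look something like this:

Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' | Select-Object -Property DisplayValue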

ndn_psselectobject

I don’t like select in automation because it creates a custom object. Once you have that object, you then need to take an extra step to extract the value of that custom object. The reason that you used select in the first place was to extract the value. select basically causes you to do double work.

So, instead, I recommend the more .Net way of using a dot selector:
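
In sketch form:

(Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name').DisplayValue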

You can store the output of that line directly into a variable that will be created as a System.String type that you can immediately use anywhere that will accept a String:
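
For instance (the variable name is arbitrary):

$AdapterDeviceName = (Get-NetAdapterAdvancedProperty -Name 'Ethernet' -DisplayName 'Hyper-V Network Adapter Name').DisplayValue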

Notice that I injected the Name property with a value of Ethernet. I didn’t need to do that. I did it to ensure that I only get a single response. Of course, it would fail if the VM didn’t have an adapter named Ethernet. I’m just trying to give you some ideas for your own automation tasks.

Viewing Hyper-V’s Network Device Name in the Guest with Regedit

All of the network adapters’ configurations live in the registry. It’s not exactly easy to find, though. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}. Not sure if it’s a good thing or a bad thing, but I can identify that key on sight now. Expand that out, and you’ll find several subkeys with four-digit names. They’ll start at 0000 and count upward. One of them corresponds to the virtual network adapter. The one that you’re looking for will have a KVP named HyperVNetworkAdapterName. Its value will be what you came to see. If you want further confirmation, there will also be a KVP named DriverDesc with a value of Microsoft Hyper-V Network Adapter (and possibly a number, if it’s not the first).
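
If you would rather script that registry check than browse to it, a rough sketch against the same class key could look like this (the SilentlyContinue switches skip subkeys that lack the value or cannot be read):

Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}' -ErrorAction SilentlyContinue |
    Get-ItemProperty -Name HyperVNetworkAdapterName -ErrorAction SilentlyContinue |
    Select-Object -Property PSChildName, HyperVNetworkAdapterName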

Understanding the Windows Server Semi-Annual Channel

Microsoft has made major changes to the way that they build and release their operating systems. The new Windows Server “Semi-Annual Channel” (SAC) marks a substantial departure from the familiar release pattern that Microsoft has established. The change has pleased some people, upset many, and confused even more. With all the flash of new features, it’s easy to miss the finer points — specifically, how you, your organization, and your issues fit into the new model.

The Traditional Microsoft OS Release Model

Traditionally, Microsoft would work on pre-release builds internally and with a small group of customers, partners, and other interested stakeholders (such as MVPs). Then, they would finesse the builds with open betas (usually called “Release Candidates”). Then, there would be an RTM (release-to-manufacturing) event followed by GA (general availability). The release would then be considered “current”. It would enjoy regular updates including service packs and feature upgrades for a few years, then it would go into “extended support” where it would only get stability and security updates. While customers purchased and worked with the “current” version, work on the next version would begin in parallel.

Not every version followed this model exactly, but all of them were similar. The most recent Windows Server operating system to employ this model is Windows Server 2016.

Changes to the Model with Windows 10/Server 2016

The “Windows Insider Program” was the first major change to Microsoft’s OS build and release model. Initially, it was most similar to the “release candidate” phase of earlier versions. Anyone could get in and gain access to Windows 10 builds before Windows 10 could even be purchased. However, it deviated from the RC program in two major ways:

  • The Windows Insider Program includes an entire community.
  • The Windows Insider Program continues to provide builds after Windows 10 GA.

The Windows Insider Community

Most of us probably began our journey to Windows Insider by clicking an option in the Windows 10 update interface. However, you can also sign up using the dedicated Windows Insider web page. You get access to a dedicated forum. And, of course, you’ll get e-mail notifications from the program team. You can tell Microsoft what you think about your build using the Feedback Hub. That applet is not exclusive to Insiders, but they’ll know if you’re talking about an Insider build or a GA build.

Ongoing Windows Insider Builds

I expect that most Insiders prize access to new builds of Windows 10 above the other perks of the program. The Windows 10 Insider Program allows you to join one of multiple “rings” (one per joined Windows 10 installation). The ring that an installation belongs to dictates how close it will be to the “cutting edge”. You can read up on these rings and what they mean on the Insider site.

The most important thing about Windows Insider builds — and the reason that I brought them up at all in this article — is that they are not considered production-ready. The fast ring builds will definitely have problems. The preview release builds will likely have problems. You’re not going to get help for those problems outside of the Insider community, and any fix will almost certainly include the term “wait for the next build” (or the next… or the one after… or some indeterminate future build). I suspect that most software vendors will be… reluctant… to officially support any of their products on an Insider build.

Windows Server Insider Program

The Windows Server Insider Program serves essentially the same purpose as the Windows 10 Insider Program, but for the server operating system. The sign-up process is a bit different, as it goes through the Windows Insider Program for Business site. The major difference is the absence of any “rings”. Only one current Windows Server Insider build exists at any given time.

Introducing the Windows Server Semi-Annual Channel

I have no idea what you’ve already read, so I’m going to assume that you haven’t read anything. But, I want to start off with some very important points that I think others gloss over or miss entirely:

  • Releases in the Windows Server Semi-Annual Channel are not Windows Server 2016! Windows Server 2016 belongs to the Long-Term Servicing Channel (LTSC). The current SAC is simply titled “Windows Server, version 1709”.
  • You cannot upgrade from Windows Server 2016 to the Semi-Annual Channel. For all I know, that might change at some point. Today, you can only switch between LTSC and SAC via a complete wipe-and-reinstall.
  • On-premises Semi-Annual Channel builds require Software Assurance (I’d like to take this opportunity to point out: so does Nano). I haven’t been in the reseller business for a while so I don’t know the current status, but I was never able to get Software Assurance added to an existing license. It was always necessary to purchase it at the same time as its base volume Windows Server license. I don’t know of any way to get Software Assurance with an OEM build. All of these things may have changed. Talk to your reseller. Ask questions. Do your research. Do not blindly assume that you are eligible to use an SAC build.
  • The license for Windows Server is interchangeable between LTSC and SAC. Meaning that, if you are a Software Assurance customer, you’ll be able to download/use either product per license count (but not both; 1 license count = 1 license for LTSC or 1 license for SAC).
  • The keys for Windows Server are not interchangeable between LTSC and SAC. I’m not yet sure how this will work out for Automatic Virtual Machine Activation. I did try adding the WS2016 AVMA key to a WS1709 guest and it did not like that one bit.
  • SAC does not offer the Desktop Experience. Meaning, there is no GUI. There is no way to install a GUI. You don’t get a GUI. You get only Core.
  • Any given SAC build might or might not have the same available roles and features as the previous SAC build. Case in point: Windows Server, version 1709 does not support Storage Spaces Direct.
  • SAC builds are available in Azure.
  • SAC builds are supported for production workloads. SAC follows the Windows Server Insider builds, but SAC is not an Insider build.
  • SAC builds will only be supported for 18 months. You can continue using a specific SAC build after that period, but you can’t get support for it.
  • SAC builds should release roughly every six months.
  • SAC builds will be numbered for their build month. Ex: 1709 = “2017 September (09)”.
  • SAC ships in Standard and Datacenter flavors only.


The Semi-Annual Channel is Not for Everyone

Lots of people have lots of complaints about the Windows Server Semi-Annual Channel. I won’t judge the reasonableness or validity of any of them. However, I think that many of these complaints are based on a misconception. People have become accustomed to a particular release behavior, so they expected SAC to serve as vNext of Windows Server 2016. Looking at Microsoft’s various messages on the topic, I don’t feel like they did a very good job of explaining the divergence. So, if that’s how you look at it, then it’s completely understandable that you’d feel like WS1709 slapped you in the face.

However, it looks different when you realize that WS1709 is not intended as a linear continuation. vNext of Windows Server 2016 will be another release in the LTSC cycle. It will presumably arrive sometime late next year or early the following year, and it will presumably be named Windows Server 2018 or Windows Server 2019. Unless there are other big changes in our future, it will have the Desktop Experience and at least the non-deprecated roles and features that you currently have available in WS2016. Basically, if you just follow the traditional release model, you can ignore the existence of the SAC releases.

Some feature updates in SAC will also appear in LTSC updates. As an example, both WS1709 and concurrent WS2016 patches introduce the ability for containers to use persistent data volumes on Cluster Shared Volumes.

Who Benefits from the Semi-Annual Channel?

If SAC is not meant for everyone, then who should use it? Let’s get one thing out of the way: no organization will use SAC for everything. The LTSC will always have a place. Do not feel like you’re going to be left behind if you stick with the LTSC.

I’ll start by simplifying some of Microsoft’s marketing-speak about targeted users:

  • Organizations with cloud-style deployments
  • Systems developers

Basically, you need to have something akin to a mission-critical level of interest in one or more of these topics:

  • Containers and related technologies (Docker, Kubernetes, etc.)
  • Software-defined networking
  • High-performance networking. I’m not talking about the “my test file copy only goes 45Mbps” flavor of “high performance” networking, but the “processing TCP packets between the real-time interface and its OLTP database causes discernible delays for my 150,000 users” flavor.
  • Multiple large Hyper-V clusters

Read the “What’s New” article for yourself. If you can’t find any must-have-yesterdays in that list, then don’t worry that you might have to wait twelve to eighteen months for vNext of LTSC to get them.

Who Benefits from the Long-Term Servicing Channel?

As I said, the LTSC isn’t going anywhere. Not only that, we will all continue to use more LTSC deployments than SAC deployments.

Choose LTSC for:

  • Stability. Even though SAC will be production-ready, the lead time between initial conception and first deployment will be much shorter. The wheel for new SAC features will be blocky.
  • Predictability: The absence of S2D in WS1709 caught almost everyone by surprise. That sort of thing won’t happen with LTSC. They’ll deprecate features first to give you at least one version’s worth of fair warning. (Note: S2D will return; it’s not going away).
  • Third-party applications: We all have vendors that are still unsure about WS2008. They’re certainly not going to sign off on SAC builds.
  • Line-of-business applications: Whether third-party or Microsoft, the big app server that holds your organization up doesn’t need to be upgraded twice each year.
  • The GUI.

What Does SAC Mean for Hyper-V?

The above deals with Windows Server Semi-Annual Channel in a general sense. Since this is a Hyper-V blog, I can’t leave without talking about what SAC means for Hyper-V.

For one thing, SAC does not have a Hyper-V Server distribution. I haven’t heard of any long-term plans, so the safest bet is to assume that future releases of Hyper-V Server will coincide with LTSC releases.

As for the Hyper-V role, it fits almost everything that SAC targets perfectly. Just look at the new Hyper-V-related features in 1709:

  • Enhanced VM load-balancing
  • Storage of VMs in storage-class memory (non-volatile RAM)
  • vPMEM
  • Splitting of “guest state” information out of the .vmrs file into its own .vmgs file
  • Support for running the host guardian service as a virtual machine
  • Support for Shielded Linux VMs
  • Virtual network encryption

Looking at that list, “Shielded Linux VMs” seems to have the most appeal to a small- or medium-sized organization. As I understand it, that’s not a feature so much as a support statement. Either way, I can shield a Linux VM on my fully-patched Windows Server 2016 build 1607 (LTSC) system.

As for the rest of the features, they will find the widest adoption in larger, more involved Hyper-V installations. I obviously can’t speak for everyone, but it seems to me that anyone that needs those features today won’t have any problems accepting the terms that go along with the switch to SAC.

For the rest of us, Hyper-V in LTSC has plenty to offer.

What to Watch Out For

Even though I don’t see any serious problems that will result from sticking with the LTSC, I don’t think this SKU split will be entirely painless.

For one thing, the general confusion over “Windows Server 2016” vs. “Windows Server, version 1709” includes a lot of technology authors. I see a great many articles with titles that include “Windows Server 2016 build 1709”. So, when you’re looking for help, you’re going to need to be on your toes. I think the limited appeal of the new features will help to mitigate that somewhat. Still, if you’re going to be writing, please keep the distinction in mind.

For another, a lot of technology writers (including those responsible for documentation) work only with the newest, hottest tech. They might not even think to point out that one feature or another belongs only to SAC. I think that the smaller target audience for the new features will keep this problem under control, as well.

The Future of LTSC/SAC

All things change. Microsoft might rethink one or both of these release models. Personally, I think they’ve made a good decision with these changes. Larger customers will be able to sit out on the bleeding edge and absorb all the problems that come with early adoption. By the time these features roll into LTSC, they’ll have undergone solid vetting cycles on someone else’s production systems. Customers in LTSC will benefit from the pain of others. That might even entice them to adopt newer releases earlier.

Most importantly, effectively nothing changes for anyone that sticks with the traditional regular release cycle. Windows Server Semi-Annual Channel offers an alternative option, not a required replacement.

Altaro VM Backup Voted Best Backup Product of the Year 2017

We are delighted to announce that Altaro has been voted Backup and Recovery/Archive Product of the Year 2017 at the prestigious annual IT industry SVC Awards 2017, beating many other well-known backup software developers. We are especially happy about this award because it is voted on by end-users and the IT community. Thank you to everyone who voted for us!

About the SVC Awards

The SVC Awards reward the products, projects, and services as well as honor companies and teams operating in the cloud, storage and digitalization sectors. The SVC Awards recognize the achievements of end-users, channel partners and vendors alike and in the case of the end-user categories, there will also be an award made to the supplier supporting the winning organization. (from svcawards.com)

Altaro VM Backup in 2017

2017 has been a very productive year for Altaro. Although the product was already very well received by system administrators around the world in 2016, we brought in a number of key features in 2017 that have taken the product to new heights. We started the year by launching Version 7 of Altaro VM Backup and adding Augmented Inline Deduplication technology to the software package. In May we brought the highly praised Cloud Management Console (CMC) to end users, in June we added the Backup Health Monitor, and in July we rolled out the ability for our customers to take offsite backups to Azure.

In 2017, we reached several customer milestones as our user base surpassed 40,000 customers and year-on-year growth hit 40%. More than 400,000 Hyper-V and VMware virtual machines are now being protected using Altaro VM Backup. More than 10,000 Altaro customers are now connected to the Multi-Tenant Cloud Management Console, and after launching the Altaro MSP program less than 12 months ago in late 2016, the service has already signed up more than 500 MSPs to its monthly subscription program.

Phew! It’s been a very busy year for Altaro and the recognition as Best Backup and Recovery/Archive Product of the Year 2017 at the SVC awards is the icing on the cake. Thank you to all our partners, distributors and end-users for continuing to embrace Altaro VM Backup and providing the feedback we need to continue growing and developing the software to meet your needs. However, the work doesn’t stop here; we have even more exciting new features in development for Altaro VM Backup that we’ll be releasing next year. Bring on 2018!