Category Archives: Hyper-V

How to Select a Placement Policy for Site-Aware Clusters

One of the more popular failover clustering enhancements in Windows Server 2016 and 2019 is the ability to define the different fault domains in your infrastructure. A fault domain lets you scope a single point of failure in hardware, whether this is a Hyper-V host (a cluster node), its enclosure (chassis), its server rack or an entire datacenter. To configure these fault domains, check out the Altaro blog post on configuring site-aware clusters and fault domains in Windows Server 2016 & 2019. After you have defined the hierarchy between your nodes, chassis, racks, and sites, the cluster’s placement policies, failover behavior, and health checks will be optimized. This blog will explain the automatic placement policies and advanced settings you can use to maximize the availability of your virtual machines (VMs) with site-aware clusters.

Site-Aware Placement Based on Storage Affinity

From reading the earlier Altaro blog about fault tolerance, you may recall that resiliency is created by distributing identical (mirrored) Storage Spaces Direct (S2D) disks across the different fault domains. Each node, chassis, rack or site may contain a copy of a VM’s virtual hard disks. However, you always want the VM to run in the same site as its disk for performance reasons, to avoid having its I/O transmitted across distance. If a VM is forced to start in a site separate from its disk, the cluster will automatically live migrate the VM to the same site as its disk after about a minute. With site-awareness, this automatic enforcement of storage affinity between a VM and its disk is given the highest site placement priority.

Configuring Preferred Sites with Site-Aware Clusters

If you have configured multiple sites in your infrastructure, then you should consider which site is your “primary” site and which should be used as a backup. Many organizations will designate their primary site as the location closest to their customers or with the best hardware, and the secondary site as the failover location, which may have only enough hardware to support critical workloads. Some enterprises may deploy identical datacenters and distribute specific workloads to each location to balance their resources. If you are splitting your workloads across different sites, you can assign each clustered workload or VM (cluster group) a preferred site. Let’s say that you want your US-East VM to run in your primary datacenter and your US-West VM to run in your secondary datacenter; you could configure the following settings via PowerShell:
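A hedged sketch of what that could look like (the group names and site names are illustrative, and the sites must already exist as fault domains):

(Get-ClusterGroup -Name "US-East VM").PreferredSite = "Primary"
(Get-ClusterGroup -Name "US-West VM").PreferredSite = "Secondary"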

Designating a preferred site for the entire cluster will ensure that after a failure the VMs will start in this location. After you have defined your sites by creating fault domains with New-ClusterFaultDomain, you can use the cluster-wide property PreferredSite to set the default location to launch VMs. Below is the PowerShell cmdlet:
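A minimal sketch, assuming a site fault domain named “Primary” has already been created:

(Get-Cluster).PreferredSite = "Primary"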

Be aware of your capacity if you usually distribute your workloads across two sites and they are forced to run in a single location, as performance will diminish on the reduced hardware. Consider using the VM prioritization feature and disabling automatic VM restarts after a failure, as this will ensure that only the most important VMs run. You can find more information in this Altaro blog on how to configure start order priority for clustered VMs.

To summarize, placement priority is based on:

  • Storage affinity
  • Preferred site for a cluster group or VM
  • Preferred site for the entire cluster

Site-Aware Placement Based on Failover Affinity

When site-awareness has been configured for a cluster, there are several automatic failover policies that are enforced behind the scenes. First, a clustered VM or group will always fail over to a node, chassis or rack within the same site before it moves to a different site. This is because local failover is always faster than cross-site failover, since the cluster can bring the VM online faster by accessing the local disk and avoiding any network latency between sites. Site-awareness is similarly honored by the cluster when a node is drained for maintenance: the VMs will automatically move to a local node, rather than a cross-site node.

Cluster Shared Volumes (CSV) disks are also site-aware. A single CSV disk can store multiple Hyper-V virtual hard disks while allowing their VMs to run simultaneously on different nodes. However, it is important that these VMs all run on nodes within the same site. This is because the CSV service coordinates disk write access across multiple nodes to a single disk. In the case of Storage Spaces Direct (S2D), the disks are mirrored so there are identical copies running in different locations (or sites). If VMs were writing to mirrored CSV disks in different locations and replicating their data without any coordination, it could lead to disk corruption. Microsoft ensures that this problem never occurs by forcing all VMs which share a CSV disk to run in the same site and write to a single instance of that disk. Furthermore, CSV distributes the VMs across different nodes within the same site, balancing the workloads and the write requests sent to the coordinator node.

Site-Aware Health Checks and Cluster Heartbeats

Advanced cluster administrators may be familiar with cluster heartbeats, which are health checks between cluster nodes. This is the primary way in which cluster nodes validate that their peers are healthy and functioning. The nodes will ping each other once per predefined interval, and if a node does not respond after several attempts it will be considered offline, failed or partitioned from the rest of the cluster. When this happens, the host is not considered an active node in the cluster and it does not provide a vote towards cluster quorum (membership).

If you have configured multiple sites in different physical locations, then you should configure the frequency of these pings (CrossSiteDelay) and the number of health checks which can be missed (CrossSiteThreshold) before a node is considered failed. The greater the distance between sites, the more network latency will exist, so these values should be tweaked to minimize the chances of a false failover during times of high network traffic. By default, the pings are sent every 1 second (1000 milliseconds) and when 20 are missed, a node is considered unavailable and any workloads it was hosting will be redistributed. You should test your network latency and cross-site resiliency regularly to determine whether you should increase or reduce these default values. Below is an example that changes the testing frequency from every 1 second to every 5 seconds and the number of missed responses from 20 to 30.
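A hedged sketch of those two property changes:

(Get-Cluster).CrossSiteDelay = 5000     # heartbeat interval in milliseconds
(Get-Cluster).CrossSiteThreshold = 30   # missed heartbeats before a node is considered down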

By increasing these values, it will now take longer for a failure to be confirmed and a failover to happen, resulting in greater downtime. The default time is 1 second x 20 misses = 20 seconds, and this example extends it to 5 seconds x 30 misses = 150 seconds.

Site-Aware Quorum Considerations

Cluster quorum is an algorithm that clusters use to determine whether there are enough active nodes in the cluster to run its core operations. For additional information, check out this series of blogs from Altaro about multi-site cluster quorum configuration. In a multi-site cluster, quorum becomes complicated since there could be a different number of nodes in each site. With site-aware clusters, “dynamic quorum” is used to automatically rebalance the number of nodes which have votes. This means that as cluster nodes drop out of membership, the number of voting nodes changes. If there are two sites with an equal number of voting nodes, then the nodes in the preferred site will stay online and run the workloads, while the lower-priority site will give up its votes and not host any VMs.

Windows Server 2012 R2 introduced a setting known as LowerQuorumPriorityNodeID, which allowed you to set a node in a site as the least important, but this was deprecated in Windows Server 2016 and should no longer be used. The idea behind it was to easily declare which location was the least important when there were two sites with the same number of voting nodes. The site with the lower-priority node would stay offline while the other partition ran the clustered workloads. That caused some confusion since the setting was only applied to a single host, but you may still see it referenced in blogs such as Altaro’s https://www.altaro.com/hyper-v/quorum-microsoft-failover-clusters/.

The site-awareness features added to the latest versions of Windows Server greatly enhance a cluster’s resilience through a combination of user-defined policies and automatic actions. By creating fault domains for clusters, it is easy to provide even greater VM availability by moving workloads between nodes, chassis, racks, and sites as efficiently as possible. Failover clustering further reduces the configuration overhead by automatically applying best practices to make failover faster and keep your workloads online longer.

Wrap-Up

Useful information yes? How many of you are using multi-site clusters in your organizations? Are you finding it easy to configure and manage? Having issues? If so, let us know in the comments section below! We’re always looking to see what challenges and successes people in the industry are running into!

Thanks for reading!


Go to Original Article
Author: Symon Perriman

How to Supercharge PowerShell Objects for Hyper-V

The best thing about working with PowerShell is that everything is an object. This makes it easy to work with the properties of the thing you need to manage without a lot of scripting or complex text parsing. For example, when using the Hyper-V PowerShell module it is trivial to get specific details of a virtual machine.

Getting virtual machine properties with PowerShell

There are many properties to this object. You can pipe the object to Get-Member to see the definition or Select-Object to show all properties and their values.
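For instance (the VM name is illustrative):

Get-VM -Name "DOM1" | Get-Member
Get-VM -Name "DOM1" | Select-Object -Property *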

But what if you want more? I usually do. For example, there is a CreationTime property on the virtual machine object. I’d like to be able to report on how old the virtual machine is. For a one-and-done approach I could write a command like this:

Defining a new custom property
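A hedged sketch of such a command; “Age” is simply my name for the calculated property:

Get-VM | Select-Object -Property Name, CreationTime, @{Name = 'Age'; Expression = { (Get-Date) - $_.CreationTime }}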

Or maybe I’d like the MemoryAssigned value to be formatted in MB.

Defining a custom property for assigned memory
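Along these lines (the property name is mine):

Get-VM | Select-Object -Property Name, @{Name = 'MemoryAssignedMB'; Expression = { $_.MemoryAssigned / 1MB }}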

The syntax is to define a hashtable with 2 keys: Name and Expression. The Name value is what you want to use for your custom property name and the Expression is a PowerShell scriptblock. In the scriptblock you can run as much code as you need. Use $_ to reference the current object in the pipeline. In the last example, PowerShell looks at the MemoryAssigned value for the CentOS virtual machine object and divides it by 1MB. Then it moves on to DOM1 and divides its value (2147483648) by 1MB to arrive at 2048, and so on.

Using Select-Object this way is great for your custom scripts and functions. But suppose I want to get these values all the time when working with the object in the console interactively. I don’t want to have to type all that Select-Object code every single time. Fortunately, there is another approach.

PowerShell’s Extensible Type System

PowerShell has an extensible type system. This means you can extend or modify the type definition of an object. PowerShell takes advantage of this feature all the time. Many of the properties you’ve seen in common objects have been added by Microsoft. You can use the Get-TypeData cmdlet to view what additions have been made to a given type.

default extensions to the virtual machine object
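For example:

Get-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine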

You can see these additions with Get-Member.

Viewing the type additions

By now, you are thinking, “How can I do this?”. The good news is that it isn’t too difficult. Back in the PowerShell stone age you would have had to create a custom XML file, which you can still do. But I think it is just as easy to use the Update-TypeData cmdlet. I’ll give you some examples, but please take the time to read the full help and examples. One thing to keep in mind is that any type extensions you make last only as long as your PowerShell session is running. The next time you start PowerShell you will have to re-define them. I dot source a script file in my profile script that adds my type customizations.

Updating Type Data

The first thing you will need to know is the object’s type name. You can see that when piping to Get-Member.
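For example, a hedged sketch that pulls the type name out of the Get-Member output:

Get-VM | Get-Member | Select-Object -ExpandProperty TypeName -First 1
# Microsoft.HyperV.PowerShell.VirtualMachine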

When you define a new type extension you need to determine the membertype. This will be something like a ScriptProperty or AliasProperty. A ScriptProperty is a value that is calculated by running some piece of PowerShell code. If you’ve created a custom type with Select-Object to test, as I did earlier, you can re-use that expression scriptblock. And of course, you need to define a name for your new property.
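Here is a hedged sketch of such a definition:

Update-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine -MemberType ScriptProperty -MemberName Age -Value { (Get-Date) - $this.CreationTime } -Force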

With this code, I’m defining a new script property called Age. The value will be the result of the scriptblock that subtracts the CreationTime property of the object from now. The one difference to note compared to my Select-Object version is that here use $this instead of $_.

I went ahead and also created an alias property of “Created” that points to “CreationTime”. If you try to create a type extension that already exists PowerShell will complain. I typically use -Force to overwrite any existing definitions. But now I can see these new type members.
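A hedged sketch of the alias, plus a quick verification:

Update-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine -MemberType AliasProperty -MemberName Created -Value CreationTime -Force
Get-VM | Get-Member -MemberType ScriptProperty, AliasProperty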

Verifying new VM type extensions

And of course, I can use them in PowerShell.

Using the new type extensions in PowerShell
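For example, using the properties defined above:

Get-VM | Select-Object -Property Name, Created, Age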

There’s really no limit to what you can do. Here is a script property that tells me when the VM started.

And here is some code that takes the memory properties and creates an ‘MB’ version of each one. I’m using some PowerShell scripting and splatting to simplify the process.
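A hedged sketch of that approach (the property names come from the VM object; the loop and splatting details are my own):

$memoryProps = 'MemoryAssigned', 'MemoryDemand', 'MemoryStartup', 'MemoryMinimum', 'MemoryMaximum'
foreach ($prop in $memoryProps) {
    # build the parameters for Update-TypeData and splat them
    $paramHash = @{
        TypeName   = 'Microsoft.HyperV.PowerShell.VirtualMachine'
        MemberType = 'ScriptProperty'
        MemberName = "$($prop)MB"
        Value      = [scriptblock]::Create("`$this.$prop / 1MB")
        Force      = $true
    }
    Update-TypeData @paramHash
}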

Now I have all sorts of information at my fingertips.

Taking advantage of all the new properties

As long as I run the Update-TypeData commands I will have this information available. In my PSHyperVTools module, which you can install from the PowerShell Gallery, it will add some of these type extensions automatically, plus a few more. You can read about it in the project’s GitHub repository.

Does this look like something you would use? What sort of type extensions did you come up with? What else do you wish you could see or do? Comments and feedback are always welcome.


Go to Original Article
Author: Jeffery Hicks

New Features Added to Altaro Office 365 Backup for Businesses

Our developers have been hard at work again recently and I’m happy to be able to bring you some great product news once again!

SharePoint Online and OneDrive for Business Data Protection Now Available in Altaro Office 365 Backup

It often comes as a surprise that Microsoft doesn’t provide long-term point-in-time restoration capabilities for SharePoint Online and OneDrive for Business. Many businesses find themselves in a situation where they require longer point-in-time capabilities than what is provided by Microsoft directly. This is where our latest product offering comes in. Altaro Office 365 Backup backs up Office 365 mailboxes as well as SharePoint Online and OneDrive for Business files simply and quickly, so you can rest easy that your data is safe and recoverable when the need arises.

If you’re familiar with the Altaro suite of backup solutions, you have likely heard about the addition of support for both SharePoint Online and OneDrive for Business in our recent Altaro Office 365 Backup Announcement for MSPs. Both services are key for the storage of next-generation workloads and this update has been well received as a result. The good news for you is that we’re now bringing that same SharePoint Online and One Drive for Business support to those privately held companies (non-MSPs) and IT departments! This means that even if you’re not an MSP, you’ll still get to take advantage of these great data protection features!

Altaro Office 365 Backup

Other Updates

We’ve made a few other improvements to the product as well! Some things that have been requested by popular demand, and we are happy to include them as well!

Tamper-Proof Audit Logs – This ensures that companies can meet their compliance requirements with respect to changes in content, such as user backup enablement or suspension, content restoration from a mailbox, OneDrive or SharePoint, and browsing of user data. It is also possible to export the audit log.

Restricted User Account Access – Administrators now have the ability to prevent certain users in their team from browsing backups or performing restores. This will provide a bit more granular access and protection to your team and organization.

Free 30-day trial

Start your Altaro Office 365 Backup free trial with no commitment for 30 days.

Read more about this great product that is making waves in the industry!

Andy Syrewicze

I currently have the distinct pleasure of acting as a Technical Evangelist for Altaro Software, makers of Altaro VM Backup. I’m heavily involved in the IT community, on Altaro’s behalf, in a number of different ways, including podcasts, webinars, blogging and public speaking. Prior to that, I spent the last 12+ years providing technology solutions across several industry verticals, working for MSPs and internal IT departments. My areas of focus include virtualization, cloud services, VMware and the Microsoft server stack, with an emphasis on Hyper-V and clustering. Outside of my day job, I spend a great deal of time working with the IT community, I’m a published author, and I’ve had the great honor of being named a Cloud and Datacenter Management MVP by Microsoft. I have a passion for technology and always enjoy talking about tech with peers, customers and IT pros over a cup of coffee or a cold beer.

Go to Original Article
Author: Andy Syrewicze

How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft’s File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products, often consuming a majority of the budget of the IT department and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features which were previously only provided by storage vendors, such as deduplication, replication, and mirroring. These features could be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature, the ability to move files between different tiers of “hot” (fast) storage and “cold” (slow) storage.

Managing hot/cold storage is conceptually similar to a computer memory cache, but at an enterprise scale. Files which are frequently accessed can be optimized to run on the hot storage, such as faster SSDs. Meanwhile, files which are infrequently accessed will be pushed to cold storage, such as older or cheaper disks. These lower-priority files will also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used because the storage is managed as a pool using Windows Server’s Storage Spaces, so you do not need to worry about managing individual drives. File placement is controlled by the Resilient File System (ReFS), which optimizes and rotates data between the “hot” and “cold” storage tiers in real-time based on usage. However, using tiered storage is only recommended for workloads that are not regularly accessed. If you have permanently running VMs or you are using all the files on a given disk, there would be little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specific to tiering, though, ReFS divides the pool of storage into two tiers automatically, one for high-speed performance and one for maximizing storage capacity. The performance tier receives all writes on the faster disk for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity which is used for tiered storage, but it is generally recommended for backups and less frequently accessed files, rather than heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring hot and cold storage, you get to define the ratio of hot to cold storage. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, then allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
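A hedged sketch, assuming an S2D pool and the built-in Performance and Capacity tier templates (the volume and pool names are illustrative):

New-Volume -FriendlyName "TieredVolume" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D on Cluster1" -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 200GB, 800GB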

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, then you can allocate a greater percentage of the disk to the performance (hot) tier. In the following example we use PowerShell cmdlets to create a 30:70 ratio between the tiers:
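A hedged sketch; tier naming varies by system, so confirm the actual friendly names with Get-StorageTier first:

Get-StorageTier | Select-Object FriendlyName, Size
Resize-StorageTier -FriendlyName "TieredVolume-Performance" -Size 300GB
Resize-StorageTier -FriendlyName "TieredVolume-Capacity" -Size 700GB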

Unfortunately, this resizing only changes the ratio between the tiers and does not change the size of the partition or volume, so you will likely also want to resize those using the Resize-Partition cmdlet.

Optimizing Hot and Cold Storage

Based on the types of workloads you are using, you may wish to further optimize when data is moved between hot and cold storage, which is known as the “aggressiveness” of the rotation. By default, the hot storage tier will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage then you want to reduce this value so that performance-tier data gets pushed to the cold storage sooner. If you have fewer write requests and want to keep data in hot storage longer, then you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that it swaps files when the performance tier reaches 70% capacity:
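A hedged sketch; the value name follows Microsoft’s mirror-accelerated parity tuning guidance, and a reboot is required afterwards:

New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" -Name DataDestageSsdFillRatioThreshold -PropertyType DWORD -Value 70 -Force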

You can apply this setting cluster-wide by using the following cmdlet:
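For example (a hedged sketch; each node still needs a reboot afterwards):

Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" -Name DataDestageSsdFillRatioThreshold -PropertyType DWORD -Value 70 -Force
}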

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with Storage Spaces, use Storage Spaces Direct (S2D) to eliminate the need for a SAN, and use ReFS to optimize the performance and capacity of these drives. By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!


Go to Original Article
Author: Symon Perriman

How to Customize Site-Aware Clusters and Fault Domains

In this guide, we’ll cover how to create fault domains and configure them in Windows Server 2019. We will also run down the different layers of resiliency provided by Windows Server and fault domain awareness with Storage Spaces Direct. Let’s get started!

Resiliency and High-Availability

Many large organizations deploy their services across multiple datacenters to not only provide high-availability but to also support disaster recovery (DR). This allows services to move from servers, virtualization hosts, cluster nodes or clusters in one site to hardware in a secondary location. Prior to the Windows Server 2016 release, this was usually done by deploying a multi-site (or “stretched”) failover cluster. This solution worked well, but it had some gaps in its manageability, namely that it was never easy to determine what hardware was running at each site. Virtual machines (VMs) could also only be moved between cluster nodes and sites, with no finer granularity, even though most datacenters organize their hardware by chassis and rack. With Windows Server 2016 and 2019, Microsoft now provides organizations with the ability to not only have server high-availability but also resiliency to chassis or rack failures and integrated site awareness through “fault domains”.

What is a Fault Domain?

A fault domain is a set of hardware components that have a shared single point of failure, such as a single power source. To provide fault tolerance, you need to have multiple fault domains so that a VM or service can move from one fault domain to another fault domain, such as from one rack to another.

The following image helps you identify these various datacenter components.

Defining a Node, Chassis, Rack, and Site for Fault Domains

Source: https://ccsearch.creativecommons.org/photos/c461e460-6a99-4421-b5f1-906e74c9446b

Configuring Fault Domains in Windows Server 2019

First, let’s review the different layers of resiliency now provided by Windows Server. The following table shows the hierarchy of the Windows Server hardware stack:

Fault Domain: High-Availability Behavior
Application: Failover clustering is used to automatically restart an application’s services or move it to another cluster node. If the application is running inside a virtual machine (VM), then guest clustering can be used, which creates a cluster from the VMs themselves.
Virtual Machine: Virtual machines (VMs) run on a failover cluster and can be restarted or failed over to another cluster node. A virtualized application can run inside the VM.
Node (Server / Host): A server can move its application to another node on the same chassis using failover clustering. The node is the single point of failure, which could be caused by an operating system crash.
Chassis: A server which has a chassis failure can move to another chassis in the same rack. A chassis is commonly used with blade servers, and its single point of failure could be a shared power source or fan.
Rack: A server which has a rack failure can move to another rack in the same site. A rack may have a single point of failure in its top-of-rack (TOR) switch.
Site: If an entire site is lost, such as from a natural disaster, a server can move to a secondary site (datacenter).

This implementation of fault domains lets you organize nodes into different chassis, racks, and sites. Remember that these fault domains are only software definitions, and they do not change the physical configuration of your datacenter. This means that if two nodes are in the same physical chassis and it fails, then both will go offline, even if you have declared them to be in different fault domains via the management interface.

This blog will specifically focus on the latest site availability and fault domain features, but check out Altaro’s blogs on Failover Clustering for more information. Additionally, as a side-note, Altaro VM Backup can provide DR and recovery functionality with its replication engine if desired.

Fault Domain Awareness with Storage Spaces Direct

A key scenario for using fault domains is to distribute your data, not just across different disks and nodes, but also across different chassis, racks and sites so that it is always available in case of an outage. Microsoft implements this using Storage Spaces Direct (S2D), which distributes mirrored disks across these different fault domains. This allows you to deploy commodity storage drives in your datacenters, and the data is automatically replicated between each disk. In the initial release of S2D, the disks were mirrored between two cluster nodes, so that if one failed, the data was already available on the second server. With the added layers of chassis and rack-awareness, additional copies can be created and distributed across different nodes, chassis, racks, and sites, providing granular resiliency throughout the different hardware layers. This means that if a node crashes, the data is still available elsewhere within the same chassis. If an entire chassis loses its power, a copy is on another chassis within the same rack. If a rack becomes unavailable due to a TOR switch misconfiguration, the data can be recovered from another rack. And if the datacenter fails, a copy of the disk is available at the secondary site.

One important consideration is that site awareness and fault domains must be configured before Storage Spaces Direct is set up. If your S2D cluster is already running and you are configuring fault domains later, you must manually move your nodes into the correct fault domains, first evicting each node from your cluster and its drives from your storage pool using cmdlets like the following:
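A hedged sketch of that eviction (the node and pool names are illustrative):

# Remove the node's drives from the pool, then evict the node itself
$disks = Get-StorageNode -Name "Node1*" | Get-PhysicalDisk -PhysicallyConnected
Remove-PhysicalDisk -PhysicalDisks $disks -StoragePoolFriendlyName "S2D on Cluster1"
Remove-ClusterNode -Name "Node1"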

Creating Fault Domains

Once you have deployed your hardware and understand the different layers in your hardware stack, you will need to enable fault domain awareness. Since only a minority of Windows Server users have multiple datacenters, this is done through commands run from any node; Microsoft wanted to avoid exposing it directly through the GUI so that inexperienced users did not accidentally turn it on and expect an operation that they lacked the hardware to support. You enable fault domain awareness by defining your fault domains with the New-ClusterFaultDomain PowerShell cmdlet, as shown in the examples below.

Remember that this hardware configuration is hierarchical, so nodes are part of chassis, which are stored in racks, which reside in sites. Your nodes will use the actual node name, and this is set automatically, such as N1.contoso.com. Next, you can define all of the different chassis, racks, and sites in your environment using friendly names and descriptions. This is helpful because your event logs will reflect your naming conventions, making troubleshooting easy.

You can name each of your chassis to match your hardware specs, such as “Chassis 1”.
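For example (the name is illustrative):

New-ClusterFaultDomain -Name "Chassis 1" -Type Chassis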

Next you can assign names to your racks, such as “Rack 1”.
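For example:

New-ClusterFaultDomain -Name "Rack 1" -Type Rack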

Finally define any sites you have and provide them with a friendly name, like “Primary” or “Seattle”.
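For example, with an optional description for the event logs:

New-ClusterFaultDomain -Name "Primary" -Type Site -Description "Seattle datacenter"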

For each of these types, you can also use the -Description or -Location switch to add additional contextual information which is displayed in event logs, making troubleshooting and hardware maintenance easier.

Configuring Fault Domains

Once you have defined the different fault domains, you can configure their hierarchical structure using a parent (and child) relationship. Starting with the node, you define which chassis they belong to and then move up the chain. For example, you may configure nodes N1 and N2 to be part of chassis C1 using PowerShell:
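A hedged sketch, assuming chassis fault domains named C1 and C2 have been created:

Set-ClusterFaultDomain -Name "N1", "N2" -Parent "C1"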

Similarly, you may set chassis C1 and C2 to reside in rack R1:
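For example:

Set-ClusterFaultDomain -Name "C1", "C2" -Parent "R1"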

Then configure racks R1 and R2 within the primary datacenter using:
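For example, assuming a site fault domain named “Primary”:

Set-ClusterFaultDomain -Name "R1", "R2" -Parent "Primary"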

To view your configuration and these relationships, run the Get-ClusterFaultDomain cmdlet.

You can also define the relationship of the hardware in an XML file. This method is described in Microsoft’s Fault domain awareness page. If you want to dig deeper, check out the full PowerShell syntax.

Wrap-Up

Now you are able to take advantage of the latest site-awareness features for Windows Server Failover Clustering, giving you additional resiliency throughout your hardware stack. We’ll have further content focused on this area in the near future, so stay tuned!

Finally, what about you? Do you see this being useful in your organization? Do you see any barrier to implementation? Let us know in the comments section below!


Go to Original Article
Author: Symon Perriman

How to Configure a Quorum Cloud Witness for Failover Clustering

Windows Server Failover Clusters are becoming commonplace throughout the industry as the high-availability solution for virtual machines (VMs) and other enterprise applications. I’ve been writing about clustering since 2007 when I joined the engineering team at Microsoft (here is one of the most referenced online articles about quorum, from 2011). Even today, one of the concepts that many users continue to misunderstand is quorum. Most admins know that it has something to do with keeping a majority of servers running, but this blog post will give more insight into why it is important to understand how it works. We will focus on the newest type of quorum configuration, known as a cloud witness, which was introduced in Windows Server 2016. This solution is designed to support both on-premises clusters and multi-site clusters, along with guest clusters which can run entirely in the Microsoft Azure public cloud.

Failover Clustering Quorum Fundamentals

NOTE: This post covers quorum for Windows Server 2016 and 2019. You can also find information related to quorum on older versions of Windows Server.

Outside of IT, the term “quorum” is defined in business practices as “the number of members of a group or organization required to be present to transact business legally, usually a majority” (Source: Dictionary.com). For Windows Server Failover Clustering, it means that there must be a majority of “cluster voters” online and in communication with each other for the cluster to operate. A cluster voter is either a cluster node or a disk which contains a copy of the cluster database.

The cluster database is a file which defines registry settings that identify the state of every element within the cluster, including all nodes, storage, networks, virtual machines (VMs) and applications. It also keeps track of which node should be the sole owner running each application and which node can write to each disk within the cluster’s shared storage. This is important because it prevents a “split-brain” scenario which can cause corruption in a cluster’s database. A split-brain happens when there is a network partition between two sets of cluster nodes, and both try to run the same application and write to the same disk in an uncoordinated fashion, which can lead to disk corruption. By designating one of these sets of cluster nodes as the authoritative servers, and forcing the secondary set to remain passive, the cluster ensures that exactly one node runs each application and writes to each disk. The determination of which partition of cluster nodes stays online is based on which side of the partition has a majority of cluster voters, or which side has quorum.

For this reason, you should always have an odd number of votes across your cluster, so that one side can always hold a majority (51% or more) of the voters. Here is a breakdown of the behavior based on the number of voting nodes or disks:

  • 2 Votes: This configuration is never recommended because both voters must be active for the cluster to stay online. If you lose communication between voters, the cluster stays passive and will not run any workloads until both voters (a majority) are operational and in communication with each other.
  • 3 Votes: This works fine because one voter can be lost, and the cluster will remain operational, provided that two of the three voters are healthy.
  • 4 Votes: This can only sustain the loss of one voter, and three voters must be active. This is supported but requires extra hardware yet provides no additional availability benefit over a three-vote cluster.
  • 5, 7, 9 … 65 Voters: An odd number of voters are recommended to maximize availability by allowing you to lose half (rounded down) of your voters. For example, in a nine-node cluster, you can lose four voters and it will continue to operate as five voters are active.
  • 6, 8, 10 … 64 Voters: This is supported, yet you can only lose half minus one voter, so you are not maximizing your availability. In a ten-node cluster you can only lose four voters, so five must remain in communication with each other. This provides the same level of availability as the previous example with nine, yet requires an additional server.

Using a Disk Witness for a Quorum Vote

Based on Microsoft’s telemetry data, a majority of failover clusters around the world are deployed with two nodes, to minimize the hardware costs. Although these two nodes only provide two votes, a third vote is provided by a shared disk, known as a “disk witness”. This disk can be any dedicated drive on a shared storage configuration that is supported by the cluster and passes the Validate a Cluster tests. This disk will also contain a copy of the cluster’s database, and just like every other clustered disk, exactly one node will own access to it. It does so by creating an open file handle on that ClusDB file. In the event where there is a network partition between the two servers, then the partition that owns the disk witness will get the extra vote and run all workloads (since it has two of three votes for quorum), while the partition with a single vote will not run anything until it can communicate with the other nodes. This configuration has been supported for several releases, however, there is still a hardware cost to providing a shared storage infrastructure, which is why a cloud witness was introduced in Windows Server 2016.

Cloud Witness for a Failover Cluster

A cloud witness is designed to provide a vote to a failover cluster without requiring any physical hardware. It is basically a disk running in Microsoft Azure which contains a copy of the ClusDB and is accessible by all cluster nodes. It uses Microsoft Azure Blob Storage, and a single Azure Storage Account can be used for multiple clusters, although each cluster requires its own blob file. The cluster database file itself is very small, which means that the cost to operate this cloud-based storage is almost negligible. The configuration is fairly easy and well-documented by Microsoft in its guide to Deploy a Cloud Witness for a Failover Cluster.

You will notice that the cloud witness is fully integrated within Failover Cluster Manager’s Configure Cluster Quorum Wizard, where you can select the option to Configure a cloud witness.

Selecting a Cloud Witness to use in the Configure Cluster Quorum Wizard

Next, you enter the Azure storage account name, key, and service endpoint.

Entering Cloud Witness details in Configure Cluster Quorum Wizard
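If you prefer PowerShell, here is a hedged one-liner equivalent (the account name and key are placeholders):

Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<primary-access-key>"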

Now you have added an extra vote to your failover cluster with much less effort and cost than creating and managing on-premises shared storage.

Failover Clustering Cloud Witness Scenarios

To conclude this blog post we’ll summarize the ideal scenarios for using the Cloud Witness:

  • On-premises clusters with no shared storage – For any even-node cluster with no extra shared storage, consider using a cloud witness as the odd vote to help determine quorum. This configuration also works well with SQL Always-On clusters and Scale-Out File Server clusters which may have no shared storage.
  • Multi-site clusters – If you have a multi-site cluster for disaster recovery, you will usually have two or more nodes at each site. If these balanced sites lose connectivity with each other, you still need a cluster voter to determine which side has quorum. By placing this arbitrating vote in a third site (a cloud witness in Microsoft Azure), it can serve as a tie-breaker to determine the authoritative cluster site.
  • Azure Guest Clusters – Now that you can deploy a failover cluster entirely within Microsoft Azure using nested virtualization (also known as a “guest cluster”), you can utilize the cloud witness as an additional cluster vote. This provides you with an end-to-end high-availability solution in the cloud.

The cloud witness is a great solution provided by Microsoft to increase availability in Failover Clusters while reducing the cost to customers. It is now easy to operate a two-node cluster without having to pay for a third host or shared storage disk, whose only role is to provide a vote. Consider using the cloud witness for your cluster deployments and look for Microsoft to continue to integrate its on-premises Windows Server solutions with Microsoft Azure as the industry’s leading hybrid cloud provider.

Go to Original Article
Author: Symon Perriman

Storage Spaces Direct Hardware Requirements and Azure Stack HCI

This article will run down the hardware requirements for Storage Spaces Direct. Since Storage Spaces found its way into Windows Server with Windows Server 2012, much has changed in Microsoft’s strategy regarding supported hardware.

Read More About Storage Spaces Direct

What is Storage Spaces Direct?

S2D Technologies in Focus

3 Important Things You Should Know About Storage Spaces Direct

In its first attempt, Microsoft gave customers a wide range of options to design the hardware part of the solution themselves. While this enabled customers to build a Storage Spaces cluster out of scrap or desktop equipment, to be honest, we ended up with many non-functional clusters during that period.

After that phase, and with the release of Windows Server 2016, Microsoft decided to only support validated system configurations from ODMs or OEMs and no longer support self-built systems. For good reason!

Storage Spaces Direct Hardware Requirements

Let’s get into the specific hardware requirements

First off, every driver, device or component used for Storage Spaces Direct needs to be “Software-Defined Datacenter” compatible and also be supported for Windows Server 2016 by Microsoft.

Storage Spaces Direct Compatibility

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements

Servers

  • Minimum of 2 servers, maximum of 16 servers
  • Recommended that all servers be the same manufacturer and model

CPU

  • Intel Nehalem or later compatible processor; or
  • AMD EPYC or later compatible processor

Memory

  • Memory for Windows Server, VMs, and other apps or workloads; plus
  • 4 GB of RAM per terabyte (TB) of cache drive capacity on each server, for Storage Spaces Direct metadata
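For example, a node with four 2 TB cache drives has 8 TB of cache capacity, so it needs 8 x 4 GB = 32 GB of RAM for Storage Spaces Direct metadata on top of whatever the operating system and VMs require.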

Boot

  • Any boot device supported by Windows Server, which now includes SATADOM
  • RAID 1 mirror is not required, but is supported for boot
  • Recommended: 200 GB minimum size

Networking

Minimum (for small scale 2-3 node)

  • 10 Gbps network interface
  • Direct-connect (switchless) is supported with 2-nodes

Recommended (for high performance, at scale, or deployments of 4+ nodes)

  • NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
  • Two or more NICs for redundancy and performance
  • 25 Gbps network interface or higher

Drives

Storage Spaces Direct works with direct-attached SATA, SAS, or NVMe drives that are physically attached to just one server each. For more help choosing drives, see the Choosing drives topic.

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements

You can find more hardware information at Microsoft Docs. You can also get more details about supported disk configurations etc. at that site.

Azure Stack HCI (Hyper-Converged Infrastructure)

To make it easier for the customer to choose when it comes to vendors and systems, Microsoft now combines supported Storage Spaces and Hyperconverged Systems under the label Azure Stack HCI. Despite the name Azure in the label, you will not need to buy a full-blown Azure Stack deployment in order to have Storage Spaces. However, with Windows Server 2019, Microsoft has made it much easier to find supported appliances for Storage Spaces and Hyperconverged deployments with that label.

When following Microsoft’s guidance, Azure Stack HCI is the starting point for the next generation Software-Defined Datacenter, where Azure (the public cloud) is at the end of the road.

Azure Stack HCI

Source: https://azure.microsoft.com/mediahandler/files/resourcefiles/azure-stack-hybrid-cloud-your-way-datasheet/Azure%20Stack%20hybrid%20cloud%20your%20way.pdf

To make it easier to find the right vendor and system for you, Microsoft has published the Azure Stack HCI catalog.

Here you can filter effectively and search against your organization’s requirements. This tool makes it super easy to see which vendors can offer the hardware you may be targeting.

For example, I was looking for a system with the following requirements for a customer branch office:

  • Regionally available in Europe
  • 2-Node Optimized
  • RDMA RoCE

As you can see in the screenshot, I got a result of 14 possible systems. Now I can contact the vendors for additional information and sizing.

Azure Stack HCI Catalog

When it comes to sizing, you should work together with your hardware vendor to get the best possible configuration. Disk sizes, types, etc. are still different from vendor to vendor and system to system.

To help you validate configurations, or to get a first idea of what you need, Microsoft has published a guide in their documentation named Planning volumes in Storage Spaces Direct. Additionally, one of the Program Managers for S2D, Cosmos Darwin, has published a small calculator as well.

Wrap Up

This blog post should give you a better idea of what kind of hardware you’ll need to get your hands on if you want to use S2D, which is a critical part of putting together a successful S2D deployment. In the next part of this series on Storage Spaces Direct, we will focus more on S2D architecture and on the competitors in the market.

Thanks for reading!

Go to Original Article
Author: Florian Klaffenbach

Why You Should Be Using VM Notes in PowerShell

One of the nicer Hyper-V features is the ability to maintain notes for each virtual machine. Most of my VMs are for testing and I’m the only one that accesses them, so I often record items like an admin password or when the VM was last updated. Of course, you would never store passwords in a production environment, but you might like to record when a VM was last modified and by whom. For managing a single VM, it isn’t that big a deal to use Hyper-V Manager. But when it comes to managing notes for multiple VMs, PowerShell is a better solution.

In this post, we’ll show you how to manage VM Notes with PowerShell and I think you’ll get the answer to why you should be using VM Notes as well. Let’s take a look.

Using Set-VM

The Hyper-V module includes a command called Set-VM which has a parameter that allows you to set a note.

Displaying a Hyper-V VM note
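For example (the VM name is illustrative):

Set-VM -Name "DOM1" -Notes "Updated $(Get-Date -Format d) by $env:USERNAME"
Get-VM -Name "DOM1" | Select-Object -Property Name, Notes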

As you can see, it works just fine. Even at scale.

Setting notes on multiple VMs
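For example, a hedged sketch that stamps every VM matching a wildcard:

Get-VM -Name DOM* | Set-VM -Notes "Patched $(Get-Date -Format d)"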

But there are some limitations. First off, there is no way to append to existing notes. You could retrieve the existing notes, construct a new value in script, and then use Set-VM. To clear a note you can run Set-VM with a value of “” for -Notes. That’s not exactly intuitive. I decided to find a better way.

Diving Deep into WMI

Hyper-V stores much of its configuration in WMI (Windows Management Instrumentation). You’ll notice that many of the Hyper-V cmdlets have parameters for CimSessions. But you can also dive into these classes, which are in the root/virtualization/v2 namespace. Many of the classes are prefixed with msvm_.

Getting Hyper-V CIM Classes with PowerShell

After a bit of research and digging around in these classes I learned that to update a virtual machine’s settings, you need to get an instance of msvm_VirtualSystemSettingData, update it and then invoke the ModifySystemSettings() method of the msvm_VirtualSystemManagementService class. Normally, I would do all of this with the CIM cmdlets like Get-CimInstance and Invoke-CimMethod. If I already have a CIMSession to a remote Hyper-V host why not re-use it?

But there was a challenge. The ModifySystemSettings() method needs a parameter – basically a text version of the msvm_VirtualSystemSettingData object. However, the text needs to be in a specific format. WMI has a way to format the text, which you’ll see in a moment. Unfortunately, there is no technique using the CIM cmdlets to format the text. Whatever Set-VM is doing under the hood is above my pay grade. Let me walk you through this using Get-WmiObject.

First, I need to get the settings data for a given virtual machine.
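A hedged sketch (the VM name is illustrative); filtering on VirtualSystemType limits the result to the realized VM rather than its snapshots:

$vmName = 'DOM1'
$data = Get-WmiObject -Namespace root/virtualization/v2 -Class Msvm_VirtualSystemSettingData -Filter "ElementName='$vmName' AND VirtualSystemType='Microsoft:Hyper-V:System:Realized'"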

This object has all of the virtual machine settings.

I can easily assign a new value to the Notes property.

$data.notes = "Last updated $(Get-Date) by $env:USERNAME"

At this point, I’m not doing much else than what Set-VM does. But if I wanted to append, I could get the existing note, add my new value and set a new value.

At this point, I need to turn this into the proper text format. This is the part that I can’t do with the CIM cmdlets.
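The trick is the GetText() method on the WMI object, which serializes the instance; a hedged sketch using the CimDtd20 format:

$text = $data.GetText([System.Management.TextFormat]::CimDtd20)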

To commit I need the system management service object.
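For example:

$vsms = Get-WmiObject -Namespace root/virtualization/v2 -Class Msvm_VirtualSystemManagementService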

I need to invoke the ModifySystemSettings() method which requires a little fancy PowerShell work.

Invoking the WMI method with PowerShell
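A hedged sketch of the invocation, continuing from the variables above:

$result = $vsms.ModifySystemSettings($text)
$result.ReturnValue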

A return value of 0 indicates success.

Verifying the change

The Network Matters

It isn’t especially difficult to wrap these steps into a PowerShell function. But here’s the challenge. Using Get-WmiObject with a remote server relies on legacy networking protocols. This is why Get-CimInstance is preferred and Get-WmiObject should be considered deprecated. So what to do? The answer is to run the WMI commands over a PowerShell remoting session. This means I can create a PSSession to the remote server using something like Invoke-Command. The connection will use WSMan and all the features of PowerShell remoting. In this session on the remote machine, I can run all the WMI commands I want. There’s no network connection required because it is local.

The end result is that I get the best of both worlds – WMI commands doing what I need over a PowerShell remoting session. By now, this might seem a bit daunting. Don’t worry. I made it easy.

Set-VMNote

In my new PSHyperVTools module, I added a command called Set-VMNote that does everything I’ve talked about. You can install the module from the PowerShell Gallery. If you are interested in the sausage-making, you can view the source code on Github at https://github.com/jdhitsolutions/PSHyperV/blob/master/functions/public.ps1. The function should make it easier to manage notes and supports alternate credentials.

Set-VMNote help

Now I can create new notes.

Creating new notes

Or easily append.

Appending notes

It might be hard to tell from the console output, so here’s what it looks like in Hyper-V Manager.

Verifying the notes

Most of the time the Hyper-V PowerShell cmdlets work just fine and meet my needs. But if they don’t, that’s a great thing about PowerShell – you can just create your own solution! And as you can probably guess, I will continue to create and share my own solutions right here.

Go to Original Article
Author: Jeffery Hicks

CentOS Linux on Hyper-V – A Complete Guide

Note: This article was originally published on May 2017. It has been fully updated to be current as of August 2019.

Microsoft continues turning greater attention to Linux. We can now run PowerShell on Linux, we can write .NET code for Linux, we can run MS SQL on Linux, Linux containers can run natively on Windows… the list just keeps growing. Linux-on-Hyper-V has featured prominently on that list for a while now, and the improvements continue to roll in.

Microsoft provides direct support for Hyper-V running several Linux distributions as well as FreeBSD. If you have an organizational need for a particular distribution, then someone already made your choice for you. If you don’t have such a mandate, you might need to make that decision yourself. I’m not a strong advocate for any particular distribution. I’ve written in the past about using Ubuntu Server as a guest. However, there are many other popular distributions available and I like to branch my knowledge.

Why Choose CentOS?

I’ve been using Red Hat’s products off and on for many years and have some degree of familiarity with them. At one time, there was simply “Red Hat Linux”. As a commercial venture attempting to remain profitable, Red Hat decided to create “Red Hat Enterprise Linux” (RHEL) which you must pay to use. With Red Hat being sensitive to the concept of free (as in what you normally think of when you hear “free”) being permanently attached to Linux in the collective conscience, they also make most of RHEL available to the CentOS Project.

One of the reasons that I chose Ubuntu was its ownership by a commercial entity. That guarantees that if you’re ever really stuck on something, there will be at least one professional organization that you can pay to assist you. CentOS doesn’t have that kind of direct backing. However, I also know from experience that relatively few administrators ever call for operating system support. The ones that will make such a call tend to work for bigger organizations that pay for RHEL or the like. The rest will call some sort of service provider, like a local IT outsourcer. With that particular need mitigated, CentOS has these strengths:

  • CentOS is based on RHEL. This is not a distribution that someone assembles in their garage (not that I have any personal opposition to such an endeavor, but almost every organization places a premium on stability and longevity in their suppliers)
  • CentOS has wide community support and familiarity. You can easily find help on the Internet. You will also not struggle to find support organizations that you can pay for help.
  • CentOS has a great deal in common with other Linux distributions. Because “Linux” is really just a kernel, and an open-source one at that, it’s theoretically possible for a distribution to completely change everything about it and build a completely unique operating system and environment. In practice, no one does. That means that the bulk of knowledge you have about any other Linux distribution is applicable to CentOS.

That hits the major points that will assure most executives that you’re making a wise decision. In the scope of Hyper-V, Microsoft’s support list specifically names CentOS.

Stable, Yet Potentially Demanding

When you use Linux’s built-in tools to download and install software, you work from approved repositories. Essentially, that means that someone decided that a particular package adequately measured up to a standard. Otherwise, you’d need to go elsewhere to acquire that package.

The default CentOS repositories are not large when compared to some other distributions. They do not contain recent versions of many common packages, including the Linux kernel. However, the versions offered in the CentOS repositories are known to be solid and stable. If you want to use more recent versions, then you’ll need to be(come) comfortable manually adding repositories and/or acquiring, compiling, and installing software.

No GUIs Here

CentOS does make at least one GUI available, but I won’t cover it. I don’t know if CentOS’s GUI requires 3D acceleration the way that Ubuntu’s does. If it does, then the GUI experience under Hyper-V would be miserable. However, I didn’t even attempt to use any CentOS GUIs because they’re really not valuable for anything other than a desktop you use as your primary workstation. If you’re new to Linux and the idea of going GUI-free bothers you, then take heart: Linux is a lot easier than you think it is. I don’t feel that any of the Linux GUIs score highly enough in the usability department to meaningfully soften the blow of transition anyway.

If you’ve already read my Ubuntu article, then you’ve already more or less seen this bit. Linux is easy because pretty much everything is a file. There are only executables, data, and configuration files. Executables can be binaries or text-based script files. So, any time you need to do anything, your first goal is to figure out what executable to call. Configuration files are almost always text-based, so you only need to learn what to set in the configuration file. The Internet can always help out with that. So, really, the hardest part about using Linux is figuring out which executable(s) you need to solve whatever problem you’re facing. The Internet can help out with that as well. You’re currently reading some of that help.

Enough talk. Let’s get going with CentOS.

Downloading CentOS

You can download CentOS for free from www.centos.org. As the site was arranged on the day that I wrote this article, there was a “Get CentOS” link in the main menu at the top of the screen and a large orange button stamped “Get CentOS Now” after the introductory text. From either of those, you reach a page with a few packaging options. I chose “DVD ISO” and used it to write this article. If you have a torrent application installed, I recommend that option; it took me quite a bit of hunting to find a fast mirror for the direct download.

For reference, I downloaded CentOS-7-x86_64-DVD-1810.iso.

How to Build a Hyper-V Virtual Machine for CentOS

There’s no GUI and CentOS is small, so don’t create a large virtual machine. These are my guidelines:

  • 2 vCPUs, no reservation. All modern operating systems work noticeably better when they can schedule two threads as opposed to one. You can turn it up later if your deployment needs more.
  • Dynamic Memory on; 512MB startup memory, 256MB minimum memory, 1GB maximum memory. You can always adjust Dynamic Memory’s maximum upward, even when the VM is active. Start low.
  • 40GB disk is probably much more than you’ll ever need. I use a dynamically expanding VHDX because there’s no reason not to. The published best practice is to create this with a forced 1 megabyte block size, which must be done in PowerShell (you can see this in the sample script below). I didn’t do this on my first several Linux VMs and noticed that they do use several gigabytes more space, although still well under 10 apiece. I leave the choice to you.
  • I had trouble using Generation 2 VMs with Ubuntu Server, but I’m having better luck with CentOS. If you use Generation 2 with your CentOS VMs on Hyper-V 2012 R2/8.1 or earlier, remember to disable Secure Boot. If using 2016, you can leave Secure Boot enabled as long as you select the “Microsoft UEFI Certificate Authority” template.
  • If your Hyper-V host is a member of a failover cluster and the Linux VM will be HA, use a static MAC address. Linux doesn’t respond well when its MAC addresses change.

The following is a sample script that you can modify to create a Linux virtual machine in Hyper-V. You can find a more polished edition in my GitHub repository.
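
Here is a minimal sketch of such a script. The VM name, paths, ISO file name, and virtual switch name are placeholders that you must adjust for your environment; the GitHub edition wraps all of this in proper parameters.

# Placeholder values; adjust for your environment
$VMName     = 'centos1'
$VMPath     = 'C:\VMs'
$ISOPath    = 'C:\ISO\CentOS-7-x86_64-DVD-1810.iso'
$SwitchName = 'vSwitch'

# Dynamically expanding VHDX with the 1 MB block size recommended for Linux guests
$VHD = New-VHD -Path (Join-Path $VMPath "$VMName.vhdx") -SizeBytes 40GB -Dynamic -BlockSizeBytes 1MB

# Generation 2 VM with 2 vCPUs and a low Dynamic Memory floor
$VM = New-VM -Name $VMName -Path $VMPath -Generation 2 -MemoryStartupBytes 512MB -VHDPath $VHD.Path -SwitchName $SwitchName
Set-VM -VM $VM -ProcessorCount 2 -DynamicMemory -MemoryMinimumBytes 256MB -MemoryMaximumBytes 1GB

# On 2016, Secure Boot can stay on with the template that trusts Linux boot loaders
Set-VMFirmware -VM $VM -SecureBootTemplate 'MicrosoftUEFICertificateAuthority'

# Attach the install ISO and make it the first boot device
$DVD = Add-VMDvdDrive -VM $VM -Path $ISOPath -Passthru
Set-VMFirmware -VM $VM -FirstBootDevice $DVD

# If the VM will be highly available, consider Set-VMNetworkAdapter -StaticMacAddress as well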

If you’re going to use this a lot, consider modifying the parameter defaults to your liking. For instance, you’re probably not going to move your install ISO often.

You could also use your first installation as the basis for a clone. Use a generic name for the VM/VHDX if that’s your plan.

A Walkthrough of CentOS Installation

When you first boot, it will default to Test this media & install CentOS 7. I typically skip the media check and just Install CentOS Linux 7.

installing CentOS

It will run through several startup items, then it will load the graphical installer. Choose your language:

CentOS 7 installer

After selecting the language, you’ll be brought to the Installation Summary screen. Wait a moment for it to detect your environment. As an example, the screen initially shows Not Ready for the Security Policy. It will change to No profile selected once it has completed its checks.

CentOS Installation Destination

You can work through the items in any order. Anything without the warning triangle can be skipped entirely.

I start with the NETWORK & HOST NAME screen as that can have bearing on other items. When you first access the screen, it will show Disconnected because it hasn’t been configured yet. That’s different behavior from Windows, which will only show disconnected if the virtual network adapter is not connected to a virtual switch.

CentOS network and host name

If you’ll be using DHCP, click the Off slider button at the top right for it to attempt to get an IP. If that works, it will automatically switch to On and display the acquired IP address. For static or advanced configuration, click the Configure button. I’ve shown the IPv4 Settings tab. You can type out the full octet mask instead of the CIDR shortcut, if you prefer. Fill out this tab, and/or the others, as necessary.

IPv4 settings

Don’t forget to change the host name at the lower left of the networking screen and Apply it before clicking Done to leave this screen.

After you’ve set up networking, set the DATE & TIME. If it can detect a network connection, you’ll be allowed to set the Network Time slider to On. Configure as desired.

CentOS date and time

You must click into the Installation Destination screen or the installer will not allow you to proceed. By default, it will select the entirety of the first hard drive for installation. It will automatically figure out the best way to divide storage. You can override it if you like. If you’re OK with defaults, just click Done.

CentOS installation destination

Explore the other screens as you desire. I don’t set anything else on my systems. At this point, you have handled all of the required items and can click Begin Installation.

installation summary

While the system installs, you’ll be allowed to set the root password and create the initial user.

As you enter the password for root, the system will evaluate its strength. If it decides that the password you chose isn’t strong enough, you’ll be forced to click Done twice to confirm. The root account is the rough equivalent of the Administrator account on Windows, so do take appropriate steps to secure it with a strong password and exercise care in the keeping of that password.

root password

The user creation screen is straightforward. It has the same password-strength behavior as the root screen.

create user

Now just wait for the installation to complete. Click Finish Configuration if prompted. Once the installation completes, it will present a Reboot button. Click when ready.

The system will restart and bring you to the login screen of a completely installed CentOS virtual machine:

CentOS Linux 7 virtual machine

Assuming that you created a named user for yourself and made it an administrator, log in with that account. Otherwise, you can log in as root. It’s poor practice to use the root account directly, and even worse to leave the root account logged in.

CentOS Post-Install Wrap-Up for Hyper-V

I have a bit of a chicken-and-egg problem here. You need to do a handful of things to wrap up, but to do that easily, it helps to know some things about Linux. If you already know about Linux, this will be no problem. Otherwise, just follow along blindly. I’ll explain more afterward. CentOS doesn’t need much, fortunately.

To make this a bit easier, you might want to wait until you’ve met PuTTY. It allows for simple copy/paste actions. Otherwise, you’ll need to type things manually or use the Paste Clipboard feature in the Hyper-V VMCONNECT window. Whatever you choose, just make sure that you follow these steps sooner rather than later.

1. Install Nano

Editing text files is a huge part of the Linux world. It’s also one of the most controversial, bordering on zealotry. “vi” is the editor of choice for a great many. Its power is unrivaled; so is its complexity. I find using vi to be one of the more miserable experiences in all of computing, and I refuse to do it when given any choice. Conversely, the nano editor is about as simple as a text-editing tool can be in a character mode world and I will happily use it for everything. Install it as follows:
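
sudo yum install nano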

The command is case-sensitive and you will be prompted for your password if not logged in as root.

2. Enable Dynamic Memory In-Guest

You need to enable the Hot Add feature to use Dynamic Memory with CentOS.

Start by creating a “rules” file. The location is important (/etc/udev/rules.d) but the name isn’t. I’ll just use the same one from Microsoft’s instructions:
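
sudo nano /etc/udev/rules.d/100-balloon.rules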

You may be prompted for your password.

You’ll now be looking at an empty file in the nano editor. Type or paste the following:
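
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}="online"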

Now press [CTRL]+[X] to exit, then press [Y] to confirm and [Enter] to confirm the filename.

At next reboot, Dynamic Memory will be functional.

3. Install Extra Hyper-V Tools

Note: some newer distributions ship these daemons out of the box (Ubuntu 18.10, for instance), so depending on your distribution and version you may be able to skip this step.

Most of the tools you need to successfully run Linux on Hyper-V are built into the CentOS distribution. There are a few additional items that you might find of interest:

  • VSS daemon (for online backup)
  • File copy daemon so you can use PowerShell to directly transfer files in from the host
  • KVP daemon for KVP transfers to and from the host

To install them:
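
sudo yum install hyperv-daemons    # this single package bundles the VSS, file copy, and KVP daemons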

4. Change the Disk I/O Scheduler

By default, Linux wants to help optimize disk I/O. Hyper-V also wants to optimize disk I/O. Two optimizers are usually worse than none. Let’s disable CentOS’s.

You must be root for this.
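
su -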

You’ll be prompted for the root password.
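
# sda is the first virtual disk; adjust if your target disk differs
echo noop > /sys/block/sda/queue/scheduler
exit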

The above will change the scheduler to “noop”, which means that CentOS will not attempt to optimize I/O for the primary hard disk. “exit” tells CentOS to exit from the root login back to your login. Note that the echo method only lasts until the next reboot; to make the change permanent, add elevator=noop to the kernel boot line in your GRUB configuration.

Credit for the echo method goes to the authors at nixCraft.

10 Tips for Getting Started with CentOS Linux on Hyper-V

This section is for those with Windows backgrounds. If you already know Linux, you probably won’t get anything out of this section. I will write it from the perspective of a seasoned Windows user. Nothing here should be taken as a slight against Linux.

1. Text Acts Very Differently

Above all, remember this: Linux is CaSE-SENsiTiVe.

  • yum and Yum are two different things. The first is a command. The second is a mistake.
  • File and directory names must always be typed exactly.

Password fields do not echo anything to the screen.

2. Things Go the Wrong Way

In Windows, you’re used to C: drives and D: drives and SMB shares that start with \\.

In Linux, everything begins with the root, which is just a single /. Absolutely everything hangs off of the root in some fashion. You don’t have a D: drive. Starting from /, you have a dev location, and drives are mounted there. For the SATA virtual drives in your Hyper-V machine, they’ll all be sda, sdb, sdc, etc. So, /dev/sdb would be the equivalent to your Windows D: drive. Usually, we use mount points instead of accessing files and folders from their hardware-based root. So, when you retrieve a listing of the root directory, the items you see could exist on more than one drive.

Partitions are just numbers appended to the drive. sda1, sda2, etc.

Directory separators are slashes (/), not backslashes (\). A directory that you’ll become familiar with is usr. It lives at /usr.

Moving around the file system should be familiar, as the Windows command line uses similar commands. Linux typically uses ls where Windows uses dir, but CentOS accepts dir. cd and mkdir work as they do on Windows. Use rm to delete things. Use cp to copy things. Use mv to move things. Also use mv to rename things.

You cannot run an executable in the same folder just by typing its name and pressing [Enter], as in the Windows command processor. PowerShell behaves the same way, so that may not be strange to you. Use dot and slash to run a script or binary in the same folder:
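
./myscript.sh    # myscript.sh stands in for your own script or binary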

Linux doesn’t use file extensions. Instead, it uses attributes. So, if you create the equivalent of a batch file and then try to execute it, Linux won’t have any idea what you want to do. You need to mark it as executable first. Do so like this:
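
chmod +x myscript.sh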

As you might expect, -x removes the executable attribute.

The default Linux shell does have tab completion, but it’s not the same as what you find on Windows. It will only work for files and directories, for starters. Second, it doesn’t cycle through possibilities the way that PowerShell does. The first tab press works if there is only one way for the completion to work. A second tab press will show you all possible options. You can use other shells with more power than the default, although I’ve never done it.

3. Quick Help is Available

Most commands and applications have a -h and/or a --help parameter that will give you some information on running them. --help is often more detailed than -h. You can sometimes type man commandname to get other help (“man” is short for “manual”). It’s not as consistent as PowerShell help, mostly because each executable and script author must come up with their own help text. Also, PowerShell’s designers got to work with the benefits of hindsight and rigidly controlled design and distribution.

4. You Can Go Home

You’ve got your own home folder, which is the rough equivalent of the “My Documents” folder in Windows. It’s at the universal alias ~. So, cd ~  takes you to your home folder. You can reference files in it with ~/filename.

5. Boss Mode

“root” is the equivalent of “Administrator” on Windows. But, the account you made has nearly the same powers — although not exactly on demand. You won’t have root powers until you specifically ask for them with “sudo”. It’s sort of like “Run as administrator” in Windows, but a lot easier. In fact, the first time you use sudo, the intro text tells you a little bit about it:

centos_sudo

So basically, if you’re going to do something that needs admin powers, you just type “sudo” before the command, just like it says. The first time, it will ask for a password. It will remember it for a while after that. However, 99% of what I do is administrative stuff, so I pop myself into a sudo session that persists until I exit, like this:
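
sudo -s    # opens an elevated shell that lasts until you type exit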

You’ll have to enter your password once, and then you’ll be in sudo mode. You can tell that you’re in sudo mode because the dollar sign in the prompt will change to a hash sign:

centos_sudos

I only use Linux for administrative work, so I always use the account with my name on it. However, even when it’s not in sudo mode, it’s still respected as an admin-level account. If you will be using a Linux system as your primary (i.e., you’ll be logged in often), create a non-administrative account to use. You can flip to your admin account or root anytime:
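
su - adminuser    # adminuser stands in for your admin-level account
su -              # with no account name, su switches to root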

Always respect the power of these accounts.

6. “Exit” Means Never Having to Say Goodbye

People accustomed to GUIs with big red Xs sometimes struggle with character mode environments. “exit” works to end any session. If you’re layered in, as in with sudo or su, you may need to type “exit” a few times. “logout” works in most, but not all contexts.

7. Single-Session is for Wimps

One of the really nifty things about Linux is multiple concurrent sessions. When you first connect, you’re in terminal 1 (tty1). Press [Alt]+[Right Arrow]. Now you’re in tty2! Keep going. 6 wraps back around to 1. [Alt]+[Left Arrow] goes the other way.

You need to be logged in to determine which terminal you’re viewing. Just type tty.

8. Patches, Updates, and Installations, Oh My.

Pretty much all applications and OS components are “packages”. “yum” and “rpm” are your package managers. They’re a bit disjointed, but you can usually find what you need to know with a quick Internet search.

Have your system check to see if updates are available (more accurately, this checks the version data on download sources):
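
sudo yum check-update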

Install package patches and upgrades:
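
sudo yum update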

There’s also an “upgrade” option which goes a bit further. Update is safer, upgrade gets more.

Show all installed packages that yum knows about:
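
yum list installed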

The rpm tool shows different results, but for my uses yum is sufficient.

Find a particular installed package, in this case, “hyperv” (spelling/case counts!):
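
yum list installed | grep hyperv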

Look for available packages:
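
yum search hyperv    # substitute any term you like for hyperv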

Install something (in this case, the Apache web server):
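
sudo yum install httpd    # httpd is Apache's package name on CentOS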

9. System Control

CentOS’s equivalent to Task Manager is top. Type top at a command prompt and you’ll be taken right to it. Use the up and down arrows and page up and page down to move through the list. Type a question mark [?] to be taken to the help menu that will show you what else you can do. Type [Q] to quit.

10. OK, I’m Done Now

If you’ve used the shutdown command in Windows, then you’ll have little trouble transitioning to Linux. shutdown tells Linux to shut down gracefully with a 1-minute timer. All active sessions get a banner telling them what’s coming.
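
sudo shutdown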

Immediate shutdown (my favorite):
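
sudo shutdown now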

Reboot immediately:
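
sudo shutdown -r now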

There’s an -H switch which, if I’m reading this right, does a hard power off. I don’t use that one.

Useful Tools for CentOS Linux

Manipulating your CentOS environment from the VMConnect console will get tiring quickly. Here are some tools to make managing it much easier.

Text Editors

I already showed you nano. Just type nano at any prompt and press [Enter] and you’ll be in the nano screen. The toolbar at the bottom shows you what keypresses are necessary to do things, ex: [CTRL]+[X] to exit. Don’t forget to start it with sudo if you need to change protected files.

The remote text editing tool that I use is Notepad++. It is a little flaky — I sometimes get Access Denied errors with it that I don’t get in any other remote tool (setting it to Active mode seems to help a little). But, the price is hard to beat. If I run into real problems, I run things through my home folder. To connect Notepad++ to your host:

  1. In NPP, go to Plugins->NppFTP->Show NppFTP Window (only click if it’s not checked):
    NPP FTP Window Selector

  2. The NppFTP window pane will appear at the far right. Click the icon that looks like a gear (which is, unfortunately, gray in color so it always looks disabled), then click Profile Settings:NPP FTP Profile Item
  3. In the Profile Settings window, click the Add New button. This will give you a small window where you can provide the name of the profile you’re creating. I normally use the name of the system.
    Add FTP Profile
  4. All the controls will now be activated.
    1. In the Hostname field, enter the DNS name or the IP address of the system you’re connecting to (if you’re reading straight through, you might not know this yet).
    2. Change the Connection type to SFTP.
    3. If you want, save the user name and password. I don’t know how secure this is. I usually enter my name and check Ask for password. If you don’t check that and don’t enter a password, it will assume a blank password.
      NPP FTP Profile

  5. You can continue adding others or changing anything you like (I suggest going to the Transfers tab and setting the mode to Active). Click Close when ready.
  6. To connect, click the Connect icon which will now be blue-ish. It will have a drop-down list where you can choose the profile to connect to.NPP FTP Connect
  7. On your first connection, you’ll have to accept the host’s key:NPP Host Key
  8. If the connection is successful, you’ll attach to your home folder on the remote system. Double-clicking an item will attempt to load it. Using the save commands in NPP will save back to the Linux system directly.NPP FTP Directory

Remember that NPP is a Windows app, and as a Windows app, it wants to save files in Windows format (I know, weird, right?). Windows expects that files encoded in human-readable formats will end lines with a carriage-return character and a linefeed character (CRLF, commonly seen escaped as \r\n). Linux only uses the linefeed character (LF, commonly seen escaped as \n). Some things in Linux will choke if they encounter a carriage return. Any time you’re using NPP to edit a Linux file, go to Edit -> EOL Conversion -> UNIX/OSX Format.

NPP EOL Conversion

WinSCP

WinSCP allows you to move files back and forth between your Windows machine and a Linux system. It doesn’t have the weird permissions barriers that Notepad++ struggles with, but it also doesn’t have Notepad++’s editing powers.

  1. Download and install WinSCP. I prefer the Commander view but do as you like.
  2. In the Login dialog, highlight New Site and fill in the host’s information:WinSCP Profiles
  3. Click Save to keep the profile. It will present a small dialog asking you to customize how it’s saved. You can change the name or create folders or whatever you like.
  4. With the host entry highlighted, click Login. You’ll be prompted with a key on first connect:WinSCP Key
  5. Upon clicking Yes, you’ll be connected to the home folder. If you get a prompt that it’s listening on FTP, something went awry because the install process we followed does not include FTP. Check the information that you plugged in and try the connection again.
  6. WinSCP integrates with the taskbar for quick launching:WinSCP Taskbar

PuTTY

The biggest tool in your Linux-controlling arsenal will be PuTTY. This gem is an SSH client for Windows. SSH (secure shell) is how you remote control Linux systems. Use it instead of Hyper-V’s virtual machine connection. It’s really just a remote console. PuTTY, however, adds functionality on top of that. It can keep sessions and it gives you dead-simple copy/paste functionality. Highlight text, and it’s copied. Right-click the window, and it’s pasted at the cursor location.

  1. Download PuTTY. I use the installer package myself but do as you like.
  2. Type in the host name or IP address in that field.PuTTY Profiles
  3. PuTTY doesn’t let you save credentials. But, you can save the session. Type a name for it in the Saved Sessions field and then click Save to add it to the list. Clicking Load on an item, or double-clicking it, will populate the connection field with the saved details.
  4. Click Open when ready. On the first connection, you’ll have to accept the host key:PuTTY Key
  5. You’ll then have to enter your login name and password. Then you’ll be brought to the same type of screen that you saw in the console:PuTTY Console
  6. Right-click the title bar of PuTTY for a powerful menu. The menu items change based on the session status. I have restarted the operating system for the screenshot below so that you can see the Restart Session item. This allows you to quickly reconnect to a system that you dropped from… say, because you restarted it.PuTTY Menu
  7. PuTTY also has taskbar integration:PuTTY Taskbar
  8. When you’re all done, remember to use “exit” to end your session.

Your Journey Has Begun

From here, I leave you to explore your fresh new Linux environment. If you’d like, we have an article on using CentOS to host a Nagios environment for monitoring your Hyper-V environment.

Go to Original Article
Author: Eric Siron

3 Fundamental Capabilities of VM Groups You Can’t Ignore

In a previous post, I introduced you to VM groups in Hyper-V and demonstrated how to work with them using PowerShell. I’m still working with them to see how I will incorporate them into my everyday Hyper-V work, but I already know that I wish the cmdlets for managing groups worked a little differently. But that’s not a problem. I can create my own tooling around these commands and build a solution that works for me. Let me share what I’ve come up with so far.

1. Finding Groups

As I explained last time, you can have a VM group that contains a collection of virtual machines, or nested management groups. By default, Get-VMGroup will return all groups. Yes, you can filter by name but you can’t filter by group type. If I want to see only Management groups, I need to use a PowerShell expression like this:
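
# GroupType is either VMCollectionType or ManagementCollectionType
Get-VMGroup | Where-Object { $_.GroupType -eq 'ManagementCollectionType' }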

This is not a complicated expression but it becomes tedious when I am repeatedly typing or modifying this command. This isn’t an issue in a script, but for everyday interactive work, it can be a bit much. My solution was to write a new command, Find-VMGroup, that works identically to Get-VMGroup except this version allows you to specify a group type.

Finding specific VM Group types with PowerShell

Your output might vary from the screenshot but I think you get the idea. The default is to return all groups, but then you might as well use Get-VMGroup. And because the group type is coded into the function, you can use tab complete to select a value.

Interested in getting the Find-VMGroup command? I have a section on how to install the module a little further down the page.

2. Expanding Groups

Perhaps the biggest issue (and even that might be a bit strong) I had with the VM Group command is that ultimately, what I really want are the members of the group. I want to be able to use groups to do something with all of the members of that group. And by members, I mean virtual machines. It doesn’t matter to me if the group is a VM Collection or Management Collection. Show me the virtual machines!

Again, this isn’t technically difficult.
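
For a VM collection, the group object exposes its virtual machines directly (LabServers is a hypothetical group name):

(Get-VMGroup -Name 'LabServers').VMMembers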

 Getting VM Group members

If you haven’t figured it out by now, I prefer simple. Getting virtual machines from a management group requires even more steps. Once again, I wrote my own command called Expand-VMGroup.
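
In its simplest form, usage would look something like this (LabServers is again hypothetical, and I’m assuming a -Name parameter that mirrors Get-VMGroup):

Expand-VMGroup -Name 'LabServers'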

Expanding a single VM group with a custom PowerShell command

The output has been customized a bit to provide a default, formatted view. There are in fact other properties you could work with.
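
To see everything the command returns, pipe to Select-Object:

Expand-VMGroup -Name 'LabServers' | Select-Object -Property *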

Viewing all properties of an expanded VM group

Depending on the command, you might be able to pipe these results to another Hyper-V command. I also know that many of the Hyper-V cmdlets take pipeline input by value, which allows you to pass a list of virtual machine names to a command. So I added a parameter to Expand-VMGroup that writes just the virtual machine names to the pipeline as a list. Now I can run commands like this:
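
# -List is my placeholder for the switch described above; check Get-Help Expand-VMGroup for the real name
Expand-VMGroup -Name 'LabServers' -List | Start-VM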

Piping Expand-VMGroup to another Hyper-V command

Again, the module containing this command can be found near the end of the article and can be installed using Install-Module.

3. Starting and Stopping Groups

The main reason I want to use VM groups is to start and stop groups of virtual machines all at once. I could use Expand-VMGroup and pipe results to Start-VM or Stop-VM but I decided to make specific commands for starting and stopping all virtual machine members of a group. If a member of the group is already in the targeted state, it is skipped.
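
Usage looks something like this; Start-VMGroup is my guess at the naming, so run Get-Command -Module PSHyperVTools to see the actual command names:

Start-VMGroup -Name 'LabServers'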

Starting members of a VM group

The third member of this group was already running so it was skipped. Now I’ll shut down the group.
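
Stop-VMGroup -Name 'LabServers'    # again, an assumed name for the companion command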

Stopping members of a VM group

It may not seem like much but every little thing I can do to get more done with less typing and effort is worth my time. I’m using full parameter names and typing out more than I actually need to for the sake of clarity.

How Do I Get These Commands?

Normally, I would show you code samples that you could use. But in this case, I think these commands are ready to use as-is. You can get the commands from my PSHyperVTools module which is free to install from the PowerShell Gallery.
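
Install-Module -Name PSHyperVTools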

If you haven’t installed anything before you might get a prompt to update the version of nuget. Go ahead and say yes.  You’ll also be prompted if you want to install from a non-trusted repository. You aren’t installing this on a mission-critical server so you should be OK. Once installed, you can use the commands that I’ve demonstrated. They should all have help and examples.

Getting help for Expand-VMGroup

The module is open source so if you’d like to review the code first or the README, jump over to https://github.com/jdhitsolutions/PSHyperV. There are a few other commands and features of the module that I hope to write about in a future article or two. But for now, I hope you’ll give these commands a spin and let me know what you think in the comments section below!

Go to Original Article
Author: Jeffery Hicks