
Free Script – Convert Legacy Teamed Hyper-V vSwitch to SET

<#

.SYNOPSIS

Converts LBFO+Virtual Switch combinations to switch-embedded teams.

.DESCRIPTION

Converts LBFO+Virtual Switch combinations to switch-embedded teams.

Performs the following steps:

1. Saves information about virtual switches and management OS vNICs (includes IPs, QoS settings, jumbo frame info, etc.)

2. If system belongs to a cluster, sets to maintenance mode

3. Disconnects attached virtual machine vNICs

4. Deletes the virtual switch

5. Deletes the LBFO team

6. Creates switch-embedded team

7. Recreates management OS vNICs

8. Reconnects previously-attached virtual machine vNICs

9. If system belongs to a cluster, ends maintenance mode

If you do not specify any overriding parameters, the new switch uses the same settings as the original LBFO+team.

.PARAMETER Id

The unique identifier(s) for the virtual switch(es) to convert.

.PARAMETER Name

The name(s) of the virtual switch(es) to convert.

.PARAMETER VMSwitch

The virtual switch(es) to convert.

.PARAMETER NewName

Name(s) to assign to the converted virtual switch(es). If blank, keeps the original name.

.PARAMETER UseDefaults

If specified, uses defaults for all values on the converted switch(es). If not specified, uses the same parameters as the original LBFO+switch or any manually-specified parameters.

.PARAMETER LoadBalancingAlgorithm

Sets the load balancing algorithm for the converted switch(es). If not specified, uses the same setting as the original LBFO+switch or the default if UseDefaults is set.

.PARAMETER MinimumBandwidthMode

Sets the desired QoS mode for the converted switch(es). If not specified, uses the same setting as the original LBFO+switch or the default if UseDefaults is set.

None: No network QoS

Absolute: minimum bandwidth values specify bits per second

Weight: minimum bandwidth values range from 1 to 100 and represent percentages

Default: use system default

WARNING: Changing the QoS mode may cause guest vNICs to fail to re-attach and may inhibit Live Migration. Use carefully if you have special QoS settings on guest virtual NICs.

.PARAMETER Notes

A note to associate with the converted switch(es). If not specified, uses the same setting as the original LBFO+switch or the default if UseDefaults is set.

.PARAMETER Force

If specified, bypasses confirmation.

.NOTES

Author: Eric Siron

Version 1.0, December 22, 2019

Released under MIT license

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam

Converts all existing LBFO+switch combinations to switch embedded teams. Copies settings from original switches and management OS virtual NICs to new switch and vNICs.

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -Name vSwitch

Converts the LBFO+switch combination of the virtual switch named “vSwitch” to a switch-embedded team. Copies settings from the original switch and management OS virtual NICs to the new switch and vNICs.

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -Force

Converts all existing LBFO+team combinations without prompting.

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -NewName NewSET

If the system has one LBFO+switch, converts it to a switch-embedded team with the name “NewSET”.

If the system has multiple LBFO+switch combinations, fails due to mismatch (see next example).

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -NewName NewSET1, NewSET2

If the system has two LBFO+switches, converts them to switch-embedded teams with the names “NewSET1” and “NewSET2”, IN THE ORDER THAT GET-VMSWITCH RETRIEVES THEM.

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam OldSwitch1, OldSwitch2 -NewName NewSET1, NewSET2

Converts the LBFO+switches named “OldSwitch1” and “OldSwitch2” to SETs named “NewSET1” and “NewSET2”, respectively.

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -UseDefaults

Converts all existing LBFO+switch combinations to switch embedded teams. Discards non-default settings for the switch and Hyper-V-related management OS vNICs. Keeps IP addresses and advanced settings (ex. jumbo frames).

.EXAMPLE

ConvertTo-SwitchEmbeddedTeam -MinimumBandwidthMode Weight

Converts all existing LBFO+switch combinations to switch embedded teams. Forces the new SET to use “Weight” for its minimum bandwidth mode.

WARNING: Changing the QoS mode may cause guest vNICs to fail to re-attach and may inhibit Live Migration. Use carefully if you have special QoS settings on guest virtual NICs.

.LINK

https://ejsiron.github.io/Posher-V/ConvertTo-SwitchEmbeddedTeam

#>

#Requires -RunAsAdministrator

#Requires -Module Hyper-V

#Requires -Version 5

[CmdletBinding(DefaultParameterSetName = 'ByName', ConfirmImpact = 'High')]

param(

[Parameter(Position = 1, ParameterSetName = 'ByName')][String[]]$Name = @(),

[Parameter(Position = 1, ParameterSetName = 'ByID', Mandatory = $true)][System.Guid[]]$Id,

[Parameter(Position = 1, ParameterSetName = 'BySwitchObject', Mandatory = $true)][Microsoft.HyperV.PowerShell.VMSwitch[]]$VMSwitch,

[Parameter(Position = 2)][String[]]$NewName = @(),

[Parameter()][Switch]$UseDefaults,

[Parameter()][Microsoft.HyperV.PowerShell.VMSwitchLoadBalancingAlgorithm]$LoadBalancingAlgorithm,

[Parameter()][Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]$MinimumBandwidthMode,

[Parameter()][String]$Notes = '',

[Parameter()][Switch]$Force

)

BEGIN

{

Set-StrictMode -Version Latest

$ErrorActionPreference = [System.Management.Automation.ActionPreference]::Stop

$IsClustered = $false

if(Get-CimInstance -Namespace root -ClassName __NAMESPACE -Filter 'Name="MSCluster"')

{

$IsClustered = [bool](Get-CimInstance -Namespace root/MSCluster -ClassName mscluster_cluster -ErrorAction SilentlyContinue)

$ClusterNode = Get-CimInstance -Namespace root/MSCluster -ClassName MSCluster_Node -Filter ('Name="{0}"' -f $env:COMPUTERNAME)

}

function Get-CimAdapterConfigFromVirtualAdapter

{

param(

[Parameter()][psobject]$VNIC

)

$VnicCim = Get-CimInstance -Namespace root/virtualization/v2 -ClassName Msvm_InternalEthernetPort -Filter ('Name="{0}"' -f $VNIC.AdapterId)

$VnicLanEndpoint1 = Get-CimAssociatedInstance -InputObject $VnicCim -ResultClassName Msvm_LANEndpoint

$NetAdapter = Get-CimInstance -ClassName Win32_NetworkAdapter -Filter ('GUID="{0}"' -f $VnicLanEndpoint1.Name.Substring(($VnicLanEndpoint1.Name.IndexOf('{'))))

Get-CimAssociatedInstance -InputObject $NetAdapter -ResultClassName Win32_NetworkAdapterConfiguration

}

function Get-AdvancedSettingsFromAdapterConfig

{

param(

[Parameter()][psobject]$AdapterConfig

)

$MSFTAdapter = Get-CimInstance -Namespace root/StandardCimv2 -ClassName MSFT_NetAdapter -Filter ('InterfaceIndex={0}' -f $AdapterConfig.InterfaceIndex)

Get-CimAssociatedInstance -InputObject $MSFTAdapter -ResultClassName MSFT_NetAdapterAdvancedPropertySettingData

}

class NetAdapterDataPack

{

[System.String]$Name

[System.String]$MacAddress

[System.Int64]$MinimumBandwidthAbsolute = 0

[System.Int64]$MinimumBandwidthWeight = 0

[System.Int64]$MaximumBandwidth = 0

[System.Int32]$VlanId = 0

[Microsoft.Management.Infrastructure.CimInstance]$NetAdapterConfiguration

[Microsoft.Management.Infrastructure.CimInstance[]]$AdvancedProperties

[Microsoft.Management.Infrastructure.CimInstance[]]$IPAddresses

[Microsoft.Management.Infrastructure.CimInstance[]]$Gateways

NetAdapterDataPack([psobject]$VNIC)

{

$this.Name = $VNIC.Name

$this.MacAddress = $VNIC.MacAddress

if ($VNIC.BandwidthSetting -ne $null)

{

$this.MinimumBandwidthAbsolute = $VNIC.BandwidthSetting.MinimumBandwidthAbsolute

$this.MinimumBandwidthWeight = $VNIC.BandwidthSetting.MinimumBandwidthWeight

$this.MaximumBandwidth = $VNIC.BandwidthSetting.MaximumBandwidth

}

$this.VlanId = [System.Int32](Get-VMNetworkAdapterVlan -VMNetworkAdapter $VNIC).AccessVlanId

$this.NetAdapterConfiguration = Get-CimAdapterConfigFromVirtualAdapter -VNIC $VNIC

$this.AdvancedProperties = @(Get-AdvancedSettingsFromAdapterConfig -AdapterConfig $this.NetAdapterConfiguration  | Where-Object -FilterScript { (-not [String]::IsNullOrEmpty($_.DefaultRegistryValue)) -and (-not [String]::IsNullOrEmpty([string]($_.RegistryValue))) -and (-not [String]::IsNullOrEmpty($_.DisplayName)) -and ($_.RegistryValue[0] -ne $_.DefaultRegistryValue) })

# alternative to the below: use Get-NetIPAddress and Get-NetRoute, but they treat empty results as errors

$this.IPAddresses = @(Get-CimInstance -Namespace root/StandardCimv2 -ClassName MSFT_NetIPAddress -Filter ('InterfaceIndex={0} AND PrefixOrigin=1' -f $this.NetAdapterConfiguration.InterfaceIndex))

$this.Gateways = @(Get-CimInstance -Namespace root/StandardCimv2 -ClassName MSFT_NetRoute -Filter ('InterfaceIndex={0} AND Protocol=3' -f $this.NetAdapterConfiguration.InterfaceIndex)) # documentation says Protocol=2 for NetMgmt, testing shows otherwise

}

}

class SwitchDataPack

{

[System.String]$Name

[Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]$BandwidthReservationMode

[System.UInt64]$DefaultFlow

[System.String]$TeamName

[System.String[]]$TeamMembers

[System.UInt32]$LoadBalancingAlgorithm

[NetAdapterDataPack[]]$HostVNICs

SwitchDataPack(

[psobject]$VSwitch,

[Microsoft.Management.Infrastructure.CimInstance]$Team,

[System.Object[]]$VNICs

)

{

$this.Name = $VSwitch.Name

$this.BandwidthReservationMode = $VSwitch.BandwidthReservationMode

switch ($this.BandwidthReservationMode)

{

[Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]::Absolute { $this.DefaultFlow = $VSwitch.DefaultFlowMinimumBandwidthAbsolute }

[Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]::Weight { $this.DefaultFlow = $VSwitch.DefaultFlowMinimumBandwidthWeight }

default { $this.DefaultFlow = 0 }

}

$this.TeamName = $Team.Name

$this.TeamMembers = ((Get-CimAssociatedInstance -InputObject $Team -ResultClassName MSFT_NetLbfoTeamMember).Name)

$this.LoadBalancingAlgorithm = $Team.LoadBalancingAlgorithm

$this.HostVNICs = $VNICs

}

}

function Set-CimAdapterProperty

{

param(

[Parameter()][System.Object]$InputObject,

[Parameter()][System.String]$MethodName,

[Parameter()][System.Object]$Arguments,

[Parameter()][System.String]$Activity,

[Parameter()][System.String]$Url

)

Write-Verbose -Message $Activity

$CimResult = Invoke-CimMethod -InputObject $InputObject -MethodName $MethodName -Arguments $Arguments -ErrorAction Continue

if ($CimResult -and $CimResult.ReturnValue -gt 0 )

{

Write-Warning -Message ('CIM error from operation: {0}. Consult {1} for error code {2}' -f $Activity, $Url, $CimResult.ReturnValue) -WarningAction Continue

}

}

}

PROCESS

{

$VMSwitches = New-Object System.Collections.ArrayList

$SwitchRebuildData = New-Object System.Collections.ArrayList

switch ($PSCmdlet.ParameterSetName)

{

'ByID'

{

$VMSwitches.AddRange($Id.ForEach( { Get-VMSwitch -Id $_ -ErrorAction SilentlyContinue }))

}

'BySwitchObject'

{

$VMSwitches.AddRange($VMSwitch.ForEach( { $_ }))

}

default # ByName

{

$NameList = New-Object System.Collections.ArrayList

$NameList.AddRange($Name.ForEach( { $_.Trim() }))

if ($NameList.Contains('') -or $NameList.Contains('*'))

{

$VMSwitches.AddRange(@(Get-VMSwitch -ErrorAction SilentlyContinue))

}

else

{

$VMSwitches.AddRange($NameList.ForEach( { Get-VMSwitch -Name $_ -ErrorAction SilentlyContinue }))

}

}

}

if ($VMSwitches.Count)

{

$VMSwitches = @(Select-Object -InputObject $VMSwitches -Unique)

}

else

{

throw('No virtual switches match the provided criteria')

}

Write-Progress -Activity 'Pre-flight' -Status 'Verifying operating system version' -PercentComplete 5 -Id 1

Write-Verbose -Message 'Verifying operating system version'

$OSVersion = [System.Version]::Parse((Get-CimInstance -ClassName Win32_OperatingSystem).Version)

if ($OSVersion.Major -lt 10)

{

throw('Switch-embedded teams not supported on host operating system versions before 2016')

}

Write-Progress -Activity 'Pre-flight' -Status 'Loading virtual switches' -PercentComplete 15 -Id 1

if ($NewName.Count -gt 0 -and $NewName.Count -ne $VMSwitches.Count)

{

$SwitchNameMismatchMessage = 'Switch count ({0}) does not match NewName count ({1}).' -f $VMSwitches.Count, $NewName.Count

if ($NewName.Count -lt $VMSwitches.Count)

{

$SwitchNameMismatchMessage += ' If you wish to rename some virtual switches but not others, specify an empty string for the switches to leave.'

}

throw($SwitchNameMismatchMessage)

}

Write-Progress -Activity 'Pre-flight' -Status 'Validating virtual switch configurations' -PercentComplete 25 -Id 1

Write-Verbose -Message 'Validating virtual switches'

foreach ($VSwitch in $VMSwitches)

{

try

{

Write-Progress -Activity ('Validating virtual switch "{0}"' -f $VSwitch.Name) -Status 'Switch is external' -PercentComplete 25 -ParentId 1

Write-Verbose -Message ('Verifying that switch "{0}" is external' -f $VSwitch.Name)

if ($VSwitch.SwitchType -ne [Microsoft.HyperV.PowerShell.VMSwitchType]::External)

{

Write-Warning -Message ('Switch "{0}" is not external, skipping' -f $VSwitch.Name)

continue

}

Write-Progress -Activity ('Validating virtual switch "{0}"' -f $VSwitch.Name) -Status 'Switch is not a SET' -PercentComplete 50 -ParentId 1

Write-Verbose -Message ('Verifying that switch "{0}" is not already a SET' -f $VSwitch.Name)

if ($VSwitch.EmbeddedTeamingEnabled)

{

Write-Warning -Message ('Switch "{0}" already uses SET, skipping' -f $VSwitch.Name)

continue

}

Write-Progress -Activity ('Validating virtual switch "{0}"' -f $VSwitch.Name) -Status 'Switch uses LBFO' -PercentComplete 75 -ParentId 1

Write-Verbose -Message ('Verifying that switch "{0}" uses an LBFO team' -f $VSwitch.Name)

$TeamAdapter = Get-CimInstance -Namespace root/StandardCimv2 -ClassName MSFT_NetLbfoTeamNic -Filter ('InterfaceDescription="{0}"' -f $VSwitch.NetAdapterInterfaceDescription)

if ($TeamAdapter -eq $null)

{

Write-Warning -Message ('Switch "{0}" does not use a team, skipping' -f $VSwitch.Name)

continue

}

if ($TeamAdapter.VlanID)

{

Write-Warning -Message ('Switch "{0}" is bound to a team NIC with a VLAN assignment, skipping' -f $VSwitch.Name)

continue

}

}

catch

{

Write-Warning -Message ('Switch "{0}" failed validation, skipping. Error: {1}' -f $VSwitch.Name, $_.Exception.Message)

continue

}

finally

{

Write-Progress -Activity ('Validating virtual switch "{0}"' -f $VSwitch.Name) -Completed -ParentId 1

}

Write-Progress -Activity ('Loading information from virtual switch "{0}"' -f $VSwitch.Name) -Status 'Team NIC' -PercentComplete 25 -ParentId 1

Write-Verbose -Message 'Loading team'

$Team = Get-CimAssociatedInstance -InputObject $TeamAdapter -ResultClassName MSFT_NetLbfoTeam

Write-Progress -Activity ('Loading information from virtual switch "{0}"' -f $VSwitch.Name) -Status 'Host virtual adapters' -PercentComplete 50 -ParentId 1

Write-Verbose -Message 'Loading management adapters connected to this switch'

$HostVNICs = Get-VMNetworkAdapter -ManagementOS -SwitchName $VSwitch.Name

Write-Verbose -Message 'Compiling virtual switch and management OS virtual NIC information'

Write-Progress -Activity ('Loading information from virtual switch "{0}"' -f $VSwitch.Name) -Status 'Storing vSwitch data' -PercentComplete 75 -ParentId 1

$OutNull = $SwitchRebuildData.Add([SwitchDataPack]::new($VSwitch, $Team, ($HostVNICs.ForEach({ [NetAdapterDataPack]::new($_) }))))

Write-Progress -Activity ('Loading information from virtual switch "{0}"' -f $VSwitch.Name) -Completed

}

Write-Progress -Activity 'Pre-flight' -Status 'Cleaning up' -PercentComplete 99 -ParentId 1

Write-Verbose -Message 'Clearing loop variables'

$VSwitch = $Team = $TeamAdapter = $HostVNICs = $null

Write-Progress -Activity 'Pre-flight' -Completed

if($SwitchRebuildData.Count -eq 0)

{

Write-Warning -Message 'No eligible virtual switches found.'

exit 1

}

$SwitchMark = 0

$SwitchCounter = 1

$SwitchStep = 1 / $SwitchRebuildData.Count * 100

$ClusterNodeRunning = $IsClustered

foreach ($OldSwitchData in $SwitchRebuildData)

{

$SwitchName = $OldSwitchData.Name

if($NewName.Count -gt 0)

{

$SwitchName = $NewName[($SwitchCounter - 1)]

}

Write-Progress -Activity 'Rebuilding switches' -Status ('Processing virtual switch {0} ({1}/{2})' -f $SwitchName, $SwitchCounter, $SwitchRebuildData.Count) -PercentComplete $SwitchMark -Id 1

$SwitchCounter++

$SwitchMark += $SwitchStep

$ShouldProcessTargetText = 'Virtual switch {0}' -f $OldSwitchData.Name

$ShouldProcessOperation = 'Disconnect all virtual adapters, remove team and switch, build switch-embedded team, replace management OS vNICs, reconnect virtual adapters'

if ($Force -or $PSCmdlet.ShouldProcess($ShouldProcessTargetText , $ShouldProcessOperation))

{

if($ClusterNodeRunning)

{

Write-Verbose -Message 'Draining cluster node'

Write-Progress -Activity 'Draining cluster node' -Status 'Draining'

$OutNull = Invoke-CimMethod -InputObject $ClusterNode -MethodName 'Pause' -Arguments @{DrainType=2;TargetNode=''}

while($ClusterNodeRunning)

{

Start-Sleep -Seconds 1

$ClusterNode = Get-CimInstance -InputObject $ClusterNode

switch($ClusterNode.NodeDrainStatus)

{

0 { Write-Error -Message 'Failed to initiate cluster node drain' }

2 { $ClusterNodeRunning = $false }

3 { Write-Error -Message 'Failed to drain cluster roles' }

# 1 is all that's left, will cause loop to continue

}

}

}

Write-Progress -Activity 'Draining cluster node' -Completed

$SwitchProgressParams = @{Activity = ('Processing switch {0}' -f $OldSwitchData.Name); ParentId = 1; Id=2 }

Write-Verbose -Message 'Disconnecting virtual machine adapters'

Write-Progress @SwitchProgressParams -Status 'Disconnecting virtual machine adapters' -PercentComplete 10

Write-Verbose -Message 'Loading VM adapters connected to this switch'

$GuestVNICs = Get-VMNetworkAdapter -VMName * | Where-Object -Property SwitchName -EQ $OldSwitchData.Name

if($GuestVNICs)

{

Disconnect-VMNetworkAdapter -VMNetworkAdapter $GuestVNICs

}

Start-Sleep -Milliseconds 250 # seems to prefer a bit of rest time between removal commands

if($OldSwitchData.HostVNICs)

{

Write-Verbose -Message 'Removing management vNICs'

Write-Progress @SwitchProgressParams -Status 'Removing management vNICs' -PercentComplete 20

Remove-VMNetworkAdapter -ManagementOS

}

Start-Sleep -Milliseconds 250 # seems to prefer a bit of rest time between removal commands

Write-Verbose -Message 'Removing virtual switch'

Write-Progress @SwitchProgressParams -Status 'Removing virtual switch' -PercentComplete 30

Remove-VMSwitch -Name $OldSwitchData.Name -Force

Start-Sleep -Milliseconds 250 # seems to prefer a bit of rest time between removal commands

Write-Verbose -Message 'Removing team'

Write-Progress @SwitchProgressParams -Status 'Removing team' -PercentComplete 40

Remove-NetLbfoTeam -Name $OldSwitchData.TeamName -Confirm:$false

Start-Sleep -Milliseconds 250 # seems to prefer a bit of rest time between removal commands

Write-Verbose -Message 'Creating SET'

Write-Progress @SwitchProgressParams -Status 'Creating SET' -PercentComplete 50

$SetLoadBalancingAlgorithm = $null

if (-not $UseDefaults)

{

if ($OldSwitchData.LoadBalancingAlgorithm -eq 5)

{

$SetLoadBalancingAlgorithm = [Microsoft.HyperV.PowerShell.VMSwitchLoadBalancingAlgorithm]::Dynamic # 5 is dynamic; https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ndisimplatcimprov/msft-netlbfoteam

}

else # SET does not have LBFO's hash options for load-balancing; assume that the original switch used a non-Dynamic mode for a reason

{

$SetLoadBalancingAlgorithm = [Microsoft.HyperV.PowerShell.VMSwitchLoadBalancingAlgorithm]::HyperVPort

}

}

if ($LoadBalancingAlgorithm)

{

$SetLoadBalancingAlgorithm = $LoadBalancingAlgorithm

}

$NewMinimumBandwidthMode = $null

if(-not $UseDefaults)

{

$NewMinimumBandwidthMode = $OldSwitchData.BandwidthReservationMode

}

if ($MinimumBandwidthMode)

{

$NewMinimumBandwidthMode = $MinimumBandwidthMode

}

$NewSwitchParams = @{NetAdapterName=$OldSwitchData.TeamMembers}

if($NewMinimumBandwidthMode)

{

$NewSwitchParams.Add('MinimumBandwidthMode', $NewMinimumBandwidthMode)

}

try

{

$NewSwitch = New-VMSwitch @NewSwitchParams -Name $SwitchName -AllowManagementOS $false -EnableEmbeddedTeaming $true -Notes $Notes

}

catch

{

Write-Error -Message ('Unable to create virtual switch {0}: {1}' -f $SwitchName, $_.Exception.Message) -ErrorAction Continue

continue

}

if($SetLoadBalancingAlgorithm)

{

Write-Verbose -Message ('Setting load balancing mode to {0}' -f $SetLoadBalancingAlgorithm)

Write-Progress @SwitchProgressParams -Status 'Setting SET load balancing algorithm' -PercentComplete 60

Set-VMSwitchTeam -Name $NewSwitch.Name -LoadBalancingAlgorithm $SetLoadBalancingAlgorithm

}

$VNICCounter = 0

foreach($VNIC in $OldSwitchData.HostVNICs)

{

$VNICCounter++

Write-Progress @SwitchProgressParams -Status ('Configuring management OS vNIC {0}/{1}' -f $VNICCounter, $OldSwitchData.HostVNICs.Count) -PercentComplete 70

$VNICProgressParams = @{Activity = ('Processing VNIC {0}' -f $VNIC.Name); ParentId = 2; Id=3 }

Write-Verbose -Message ('Adding virtual adapter "{0}" to switch "{1}"' -f $VNIC.Name, $NewSwitch.Name)

Write-Progress @VNICProgressParams -Status 'Adding vNIC' -PercentComplete 10

$NewNic = Add-VMNetworkAdapter -SwitchName $NewSwitch.Name -ManagementOS -Name $VNIC.Name -StaticMacAddress $VNIC.MacAddress -Passthru

$SetNicParams = @{ }

if ((-not $UseDefaults) -and $VNIC.MinimumBandwidthAbsolute -and $NewSwitch.BandwidthReservationMode -eq [Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]::Absolute)

{

$SetNicParams.Add('MinimumBandwidthAbsolute', $VNIC.MinimumBandwidthAbsolute)

}

elseif ((-not $UseDefaults) -and $VNIC.MinimumBandwidthWeight -and $NewSwitch.BandwidthReservationMode -eq [Microsoft.HyperV.PowerShell.VMSwitchBandwidthMode]::Weight)

{

$SetNicParams.Add('MinimumBandwidthWeight', $VNIC.MinimumBandwidthWeight)

}

if ($VNIC.MaximumBandwidth)

{

$SetNicParams.Add('MaximumBandwidth', $VNIC.MaximumBandwidth)

}

Write-Verbose -Message ('Setting properties on virtual adapter "{0}" on switch "{1}"' -f $VNIC.Name, $NewSwitch.Name)

Write-Progress @VNICProgressParams -Status 'Setting vNIC parameters' -PercentComplete 20

Set-VMNetworkAdapter -VMNetworkAdapter $NewNic @SetNicParams -ErrorAction Continue

if($VNIC.VlanId)

{

Write-Progress @VNICProgressParams -Status 'Setting VLAN ID' -PercentComplete 30

Write-Verbose -Message ('Setting VLAN ID on virtual adapter "{0}" on switch "{1}"' -f $VNIC.Name, $NewSwitch.Name)

Set-VMNetworkAdapterVlan -VMNetworkAdapter $NewNic -Access -VlanId $VNIC.VlanId

}

$NewNicConfig = Get-CimAdapterConfigFromVirtualAdapter -VNIC $NewNic

if ($VNIC.IPAddresses.Count -gt 0)

{

Write-Progress @VNICProgressParams -Status 'Setting IP and subnet masks' -PercentComplete 40

foreach($IPAddressData in $VNIC.IPAddresses)

{

Write-Verbose -Message ('Setting IP address {0}' -f $IPAddressData.IPAddress)

$OutNull = New-NetIPAddress -InterfaceIndex $NewNicConfig.InterfaceIndex -IPAddress $IPAddressData.IPAddress -PrefixLength $IPAddressData.PrefixLength -SkipAsSource $IPAddressData.SkipAsSource -ErrorAction Continue

}

Write-Progress @VNICProgressParams -Status 'Setting DNS registration behavior' -PercentComplete 41

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetDynamicDNSRegistration' `
-Arguments @{ FullDNSRegistrationEnabled = $VNIC.NetAdapterConfiguration.FullDNSRegistrationEnabled; DomainDNSRegistrationEnabled = $VNIC.NetAdapterConfiguration.DomainDNSRegistrationEnabled } `
-Activity ('Setting DNS registration behavior (dynamic registration: {0}, with domain name: {1}) on {2}' -f $VNIC.NetAdapterConfiguration.FullDNSRegistrationEnabled, $VNIC.NetAdapterConfiguration.DomainDNSRegistrationEnabled, $NewNic.Name) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/setdynamicdnsregistration-method-in-class-win32-networkadapterconfiguration'

foreach($GatewayData in $VNIC.Gateways)

{

Write-Verbose -Message ('Setting gateway address {0}' -f $GatewayData.NextHop)

$OutNull = New-NetRoute -InterfaceIndex $NewNicConfig.InterfaceIndex -DestinationPrefix $GatewayData.DestinationPrefix -NextHop $GatewayData.NextHop -RouteMetric $GatewayData.RouteMetric

}

Write-Progress @VNICProgressParams -Status 'Setting gateways' -PercentComplete 42

if ($VNIC.NetAdapterConfiguration.DefaultIPGateway)

{

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetGateways' `
-Arguments @{ DefaultIPGateway = $VNIC.NetAdapterConfiguration.DefaultIPGateway } `
-Activity ('Setting gateways {0} on {1}' -f $VNIC.NetAdapterConfiguration.DefaultIPGateway, $NewNic.Name) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/setgateways-method-in-class-win32-networkadapterconfiguration'

}

if($VNIC.NetAdapterConfiguration.DNSDomain)

{

Write-Progress @VNICProgressParams -Status 'Setting DNS domain' -PercentComplete 43

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetDNSDomain' `
-Arguments @{ DNSDomain = $VNIC.NetAdapterConfiguration.DNSDomain } `
-Activity ('Setting DNS domain {0} on {1}' -f $VNIC.NetAdapterConfiguration.DNSDomain, $NewNic.Name) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/setdnsdomain-method-in-class-win32-networkadapterconfiguration'

}

if ($VNIC.NetAdapterConfiguration.DNSServerSearchOrder)

{

Write-Progress @VNICProgressParams -Status 'Setting DNS servers' -PercentComplete 44

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetDNSServerSearchOrder' `
-Arguments @{ DNSServerSearchOrder = $VNIC.NetAdapterConfiguration.DNSServerSearchOrder } `
-Activity ('Setting DNS servers {0} on {1}' -f [String]::Join(', ', $VNIC.NetAdapterConfiguration.DNSServerSearchOrder), $NewNic.Name) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/setdnsserversearchorder-method-in-class-win32-networkadapterconfiguration'

}

if($VNIC.NetAdapterConfiguration.WINSPrimaryServer)

{

Write-Progress @VNICProgressParams -Status 'Setting WINS servers' -PercentComplete 45

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetWINSServer' `
-Arguments @{ WINSPrimaryServer = $VNIC.NetAdapterConfiguration.WINSPrimaryServer; WINSSecondaryServer = $VNIC.NetAdapterConfiguration.WINSSecondaryServer } `
-Activity ('Setting WINS servers {0} on {1}' -f ([String]::Join(', ', $VNIC.NetAdapterConfiguration.WINSPrimaryServer, $VNIC.NetAdapterConfiguration.WINSSecondaryServer)), $NewNic.Name) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/setwinsserver-method-in-class-win32-networkadapterconfiguration'

}

}

if($VNIC.NetAdapterConfiguration.TcpipNetbiosOptions) # defaults to 0

{

Write-Progress @VNICProgressParams -Status 'Setting NetBIOS over TCP/IP behavior' -PercentComplete 50

Set-CimAdapterProperty -InputObject $NewNicConfig -MethodName 'SetTcpipNetbios' `
-Arguments @{ TcpipNetbiosOptions = $VNIC.NetAdapterConfiguration.TcpipNetbiosOptions } `
-Activity ('Setting NetBIOS over TCP/IP behavior on {0} to {1}' -f $NewNic.Name, $VNIC.NetAdapterConfiguration.TcpipNetbiosOptions) `
-Url 'https://docs.microsoft.com/en-us/windows/win32/cimwin32prov/settcpipnetbios-method-in-class-win32-networkadapterconfiguration'

}

Write-Progress @VNICProgressParams -Status 'Applying advanced properties' -PercentComplete 60

$NewNicAdvancedProperties = Get-AdvancedSettingsFromAdapterConfig -AdapterConfig $NewNicConfig

$PropertiesCounter = 0

$PropertyProgressParams = @{Activity = 'Processing VNIC advanced properties'; ParentId = 3; Id=4 }

foreach($SourceAdvancedProperty in $VNIC.AdvancedProperties)

{

foreach($NewNicAdvancedProperty in $NewNicAdvancedProperties)

{

if($SourceAdvancedProperty.ElementName -eq $NewNicAdvancedProperty.ElementName)

{

$PropertiesCounter++

Write-Progress @PropertyProgressParams -PercentComplete ($PropertiesCounter / $VNIC.AdvancedProperties.Count * 100) -Status ('Applying property {0}' -f $SourceAdvancedProperty.DisplayName)

Write-Verbose ('Setting advanced property {0} to {1} on {2}' -f $SourceAdvancedProperty.DisplayName, $SourceAdvancedProperty.DisplayValue, $VNIC.Name)

$NewNicAdvancedProperty.RegistryValue = $SourceAdvancedProperty.RegistryValue

Set-CimInstance -InputObject $NewNicAdvancedProperty -ErrorAction Continue

}

}

}

}

Write-Progress @VNICProgressParams -Completed

Write-Progress @SwitchProgressParams -Status 'Reconnecting guest vNICs' -PercentComplete 80

if($GuestVNICs)

{

foreach ($GuestVNIC in $GuestVNICs)

{

try

{

Connect-VMNetworkAdapter -VMNetworkAdapter $GuestVNIC -VMSwitch $NewSwitch

}

catch

{

Write-Error -Message ('Failed to connect virtual adapter "{0}" with MAC address "{1}" to virtual switch "{2}": {3}' -f $GuestVNIC.Name, $GuestVNIC.MacAddress, $NewSwitch.Name, $_.Exception.Message) -ErrorAction Continue

}

}

}

Write-Progress @SwitchProgressParams -Completed

}

}

if($IsClustered)

{

Write-Verbose -Message 'Resuming cluster node'

$OutNull = Invoke-CimMethod -InputObject $ClusterNode -MethodName ‘Resume’ -Arguments @{FailbackType=1}

}

}

Author: Eric Siron

How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different types of replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster as this also provides you with local high-availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration and hardware setup. The next critical decision you have to make is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage to allow an application to run on any host and access the same data. Multi-site clusters behave the same way, but they require multiple independent storage arrays at each site, with the data replicated between them. The data for the clustered application or virtual machine (VM) on each site should use its own local storage array, or it could have significant latency if each disk IO operation had to go to the other location.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while allowing the VMs to run on different nodes. The challenge when using CSV in a multi-site cluster is that the VMs must make sure that they are always writing to their disk in their site, and not the replicated copy. Most storage providers offer CSV-aware solutions, and you must make sure that they explicitly support multi-site clustering scenarios. Often the vendors will force writes at the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write command occurs on the primary site, it is then copied to the secondary site, and an acknowledgment is returned to both the primary and secondary storage arrays before that write is committed. Synchronous replication ensures consistency between both sites and avoids data loss in the event that there is a crash between replication cycles. The challenge of writing to two sets of disks in different locations is that the physical distance between sites must be close or it can affect the performance of the application. Even with a high-bandwidth and low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should be considered with the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors which provide a file-based replication solution that can run on top of commodity storage hardware. These providers will keep track of individual files which have changed, and only copy those. They are often less efficient than the block-level replication solutions as larger chunks of data (full files) must be copied, however, the total cost of ownership can be much less. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s Datakeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements. Now you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, it does not replace the real thing. This is because if the VM (VHD) on one site becomes corrupted, that same error is likely going to be copied to the secondary site. You should still regularly back up any production workloads running at either site.  This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Author: Symon Perriman

How to Customize Hyper-V VMs using PowerShell

In this article, we’ll be covering all the main points for deploying and modifying virtual machines in Hyper-V using PowerShell.

You can create a Hyper-V virtual machine easily using a number of tools. The “easy” tool, Hyper-V Manager’s New Virtual Machine Wizard (and its near-equivalent in Failover Cluster Manager), creates only a basic system. It has a number of defaults that you might not like. If you forget to change something, then you might have to schedule downtime later to correct it. You have other choices for VM creation. Among these, PowerShell gives you the greatest granularity and level of control. We’ll take a tour of the capability at your fingertips. After the tour, we will present some ways that you can leverage this knowledge to make VM creation simpler, quicker, and less error-prone than any of the GUI methods.

Cmdlets Related to Virtual Machine Creation

Of course, you create virtual machines using New-VM. But, like the wizards, it has limits. You will use other cmdlets to finesse the final product into exactly what you need.

The cmdlets demonstrated in this article (New-VM, Set-VM, and Set-VMProcessor, along with helpers such as Get-VMHostSupportedVersion) encompass the features needed for a majority of cases. If you need something else, you can start with the complete list of Hyper-V-related cmdlets.

Note: This article was written using the cmdlets as found on Windows Server 2019.

Comparing PowerShell to the Wizards for Virtual Machine Creation

PowerShell makes some things a lot quicker than the GUI tools. That doesn’t always apply to virtual machine creation. You will need to consider the overall level of effort before choosing either approach. Start with an understanding of what each approach asks of you.

The GUI Wizard Outcome

Virtual machine configuration points you can control when creating a virtual machine using the wizard:

  • Name
  • Virtual machine generation
  • Storage location
  • Virtual switch connection
  • The startup memory quantity and whether the VM uses Dynamic Memory
  • Attach or create one VHD or VHDX to the default location
  • Initial boot configuration

If you used the wizard from within Failover Cluster Manager, it will have also added the VM to the cluster’s highly-available roles.

The wizard does fairly well at hitting the major configuration points for a new virtual machine. It does miss some things, though. Most notably, you only get vCPU. Once you finish using the wizard to create a virtual machine, you must then work through the VM’s properties to fix up anything else.

The Windows Admin Center Outcome

Virtual machine configuration points you can control when creating a virtual machine using Windows Admin Center:

  • Name
  • Virtual machine generation
  • The system that will host the VM (if you started VM creation from the cluster panel)
  • Base storage location — one override for both the VM’s configuration files and the VHDX(s)
  • Virtual switch connection
  • The number of virtual processors
  • The startup memory quantity, whether the VM uses Dynamic Memory, and DM’s minimum and maximum values
  • Attach or create one or more VHDs or VHDXs
  • Initial boot configuration

If you used the wizard from within Failover Cluster Manager, it will have also added the VM to the cluster’s highly-available roles.

Windows Admin Center does more than the MMC wizards, making it more likely that you can immediately use the created virtual machine. It does miss a few common configuration points, such as VLAN assignment and startup/shutdown settings.

The PowerShell Outcome

As for PowerShell, it misses nothing on the outcome. It can do everything. Some parts take a bit more effort. You will often need two or more cmdlets to fully create a VM as desired. Before we demonstrate them, we need to cover the difference between PowerShell and the above GUI methods.

Why Should I Use PowerShell Instead of the GUI to Create a Virtual Machine?

So, if PowerShell takes more work, why would you do it? Well, if you have to create only one or two VMs, maybe you wouldn’t. In a lot of cases, it makes the most sense:

  • One-stop location for the creation of VMs with advanced settings
  • Repeatable VM creation steps
  • More precise control over the creation
  • Access to features and settings unavailable in the GUI

For single VM creation, PowerShell saves you from some double-work and usage of multiple interfaces. You don’t have to run a wizard to create the VM and then dig through property sheets to make changes. You also don’t have to start in a wizard and then switch to PowerShell if you want to change a setting not included in the GUI.

Understanding Permissions for Hyper-V’s Cmdlets

If you run these cmdlets locally on the Hyper-V host as presented in this article, then you must belong to the local Administrators group. I have personally never used the “Hyper-V Administrators” group, ever, just on principle. A Hyper-V host should not do anything else, and I have not personally encountered a situation where it made sense to separate host administration from Hyper-V administration. I have heard from others that membership in the “Hyper-V Administrators” group does not grant the powers that they expect. Your mileage may vary.

Additional Requirements for Remote Storage

If the storage location for your VMs or virtual hard disks resides on a remote system (SMB), then you have additional concerns that require you to understand the security model of Hyper-V’s helper services. Everything that you do with the Hyper-V cmdlets (and GUI tools) accesses a central CIM-based API. These APIs do their work by a two-step process:

  • The Hyper-V host verifies that your account has permission to access the requested API
  • Service on the Hyper-V host carries out the requested action within its security context

By default, these services run as the “Local System” account. They present themselves to other entities on the network as the Hyper-V host’s computer account, not your account. Changing the account that runs the services places you in an unsupported configuration. Just understand that they run under that account and act accordingly.

The Hyper-V host’s computer account must have at least the Modify permission on the remote NTFS/ReFS file system and at least Change on the SMB share.

Additional Requirements for Remote Sessions

If you run these cmdlets remotely, whether explicitly (inside a PSSession) or implicitly (using the ComputerName parameter), and you do anything that depends on SMB storage (a second hop), then you must configure delegation.

The security points of a delegated operation:

  • The account that you use to run the cmdlet must have administrator privileges on the Hyper-V host
  • The Hyper-V host must allow delegation of credentials to the target location
  • You must configure the target SMB share as indicated in the last sentence of the preceding section

These rules apply whether you allow the commands to use the host’s configured default locations or if you override.

If you need help with the delegation part, we have a script for delegation.

Shortest Possible Use of New-VM

You can run New-VM with only one required parameter:
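The original code sample did not survive the page conversion, but based on the characteristics listed next, it would have been something as short as this, with only the Name supplied:

New-VM -Name 'demovm'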

This creates a virtual machine on the local host with the following characteristics:

  • Name: “demovm”
  • Generation 1
  • 1 vCPU
  • No virtual hard disk
  • Virtual CD/DVD attached to virtual IDE controller 1, location 0
  • Synthetic adapter, not connected to a virtual switch
  • Boots from CD (only bootable device on such a system)

I do not use the cmdlet this way, ever. I personally create only Generation 2 machines now unless I have some overriding reason. You can change all other options after creation. Also, I tend to connect to the virtual switch right away, as it saves me a cmdlet later.

I showed this usage so that you understand the default behavior.

Simple VM Creation in PowerShell

We’ll start with a very basic VM, using simple but verbose cmdlets.
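A command along these lines produces the VM described in the next paragraph (parameter order is illustrative):

New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch'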

The above creates a new Generation 2 virtual machine in the host’s default location named “demovm” with 2 gigabytes of startup memory and connects it to the virtual switch named “vSwitch”. It uses static memory because New-VM cannot enable Dynamic Memory. It uses one vCPU because New-VM cannot override that. It does not have a VHDX. We can do that with New-VM, but I have a couple of things to note for that, and I wanted to start easy. Yes, you will have to issue more cmdlets to change the additional items, but you’re already in the right place to do that. No need to switch to another screen.

Before we move on to post-creation modifications, let’s look at uncommon creation options.

Create a Simple VM with a Specific Version in PowerShell

New-VM has one feature that the GUI cannot replicate by any means: it can create a VM with a specific configuration version. Without overriding, you can only create VMs that use the maximum supported version of the host that builds the VM. If you will need to migrate or replicate the VM to a host running an older version, then you must use New-VM and specify a version old enough to run on all necessary hosts.

To create the same diskless VM as above, but that can run on a Windows Server 2012 R2 host:
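Something like the following does it; configuration version 5.0 corresponds to Windows Server 2012 R2:

New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -Version 5.0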

You can see the possible versions and their compatibility levels with  Get-VMHostSupportedVersion.

Be aware that creating a VM with a lower version may have unintended side effects. For instance, versions prior to 8 don’t support hardware thread counts so they won’t have access to Hyper-Threading when running on a Hyper-V host using the core scheduler. You can see the official matrix of VM features by version on the related Microsoft docs page.

Note: New-VM also exposes the Experimental and Prerelease switches, but these don’t work on regular release versions of Hyper-V. These switches create VMs with versions above the host’s normally supported maximum. Perhaps they function on Insider versions, but I have not tried.

Simple VM Creation with Positional Parameters

When we write scripts, we should always type out the full names of parameters. But, if you’re working interactively, take all the shortcuts you like. Let’s make that same “demovm”, but save ourselves a few keystrokes:
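Using New-VM's positional parameters (Name, MemoryStartupBytes, and Generation, in that order), the shortened command looks something like this:

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch'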

SwitchName is the only non-positional parameter that we used. You can tell from the help listing ( Get-Help New-VM):

Each parameter surrounded by double brackets ( [[ParameterName]]) is positional. As long as you supply its value in the exact order that it appears, you do not need to type its name.

In only 43 characters, we have accomplished the same as all but one of the wizard’s tabs. If you want to make it even shorter, the quote marks around the VM and switch names are only necessary if they contain spaces. And, once the cmdlet completes, we can just keep typing to change anything that New-VM didn’t cover.

Create a Simple VM in a Non-Default Path

We can place the VM in a location other than the default with one simple change, but it has a behavioral side effect. First, the cmdlet:
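A sketch of the command, using an illustrative “C:\LocalVMs” target folder:

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch' -Path 'C:\LocalVMs'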

The Path parameter overrides the placement of the VM from the host defaults. It does not impact the placement of any virtual hard disks.

As for the previously mentioned side effect, compare the value of the Path parameter of a VM created using the default (on my system):
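On a host still using the installation defaults, that looks something like:

PS C:\> (Get-VM -Name 'demovm').Path
C:\ProgramData\Microsoft\Windows\Hyper-V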

The Path parameter of a VM with the overridden path value:
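With the override from the previous example, it would instead resemble:

PS C:\> (Get-VM -Name 'demovm').Path
C:\LocalVMs\demovm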

When you do not specify a Path, the cmdlet will place all of the virtual machine’s files in the relevant subfolders of the host’s default path (Virtual Machines, Snapshots, etc.). When you specify a path, it first creates a subfolder with the name of the VM, then creates all those other subfolders inside. As far as I know, all of the tools exhibit this same behavior (I did not test WAC).

Create a VM with a VHDX, Single Cmdlet

To create the VM with a virtual hard disk in one shot, you must specify both the NewVHDPath and NewVHDSizeBytes parameters. NewVHDPath operates somewhat independently of Path.

Start with the easiest usage:
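Roughly like this (the 40GB size is an arbitrary example value):

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch' -NewVHDPath 'demovm.vhdx' -NewVHDSizeBytes 40GB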

The above cmdlet does very nearly all the same things as the GUI wizard, but in one line. It starts by doing the same things as the first simple cmdlet that I showed you. It also creates a VHDX with the specified name. Since this usage only indicates the file name, the cmdlet creates the disk in the host’s default virtual hard disk storage location. To finish up, it attaches the new disk to the VM’s first disk boot location (IDE 0:0 or SCSI 0:0, depending on VM Generation).

Create a VM with a VHDX, Override the VHDX Storage Location

Don’t want the VHDX in the default location? Just change NewVHDPath so that it specifies the full path to the desired location:
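For example, with a hypothetical “C:\VHDStore” folder standing in for your preferred location:

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch' -NewVHDPath 'C:\VHDStore\demovm.vhdx' -NewVHDSizeBytes 40GB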

Create a VM with a VHDX, Override the Entire VM Location

Want to change the location of the entire VM, but don’t want to specify the path twice? Override placement of the VM using Path, but provide only the file name for NewVHDPath:
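Something like the following, which matches the folder layout described next:

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch' -Path 'C:\LocalVMs' -NewVHDPath 'demovm.vhdx' -NewVHDSizeBytes 40GB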

The above cmdlet creates a “demovm” folder in “C:\LocalVMs”. It places the virtual machine’s configuration files in a “Virtual Machines” subfolder and places the new VHDX in a “Virtual Hard Disks” subfolder.

Just as before, you can place the VHDX into an entirely different location just by providing the full path.

Notes on VHDX Creation with New-VM

A few points:

  • You must always supply the VHDX’s complete file name. New-VM will not guess at what you want to call your virtual disk, nor will it auto-append the .vhdx extension.
  • You must always supply a .vhdx extension. New-VM will not create a VHD formatted disk.
  • All the rules about second-hops and delegation apply.
  • Paths operate from the perspective of the Hyper-V host. When running remotely, a path like “C:\LocalVMs” means the C: disk on the host, not on your remote system.
  • You cannot specify an existing file. The entire cmdlet will fail if the file already exists (meaning that, if you tell it to create a new disk and it cannot for some reason, then it will not create the VM, either).

As with the wizard, New-VM can create only one VHDX and it will always connect to the primary boot location (IDE controller 0 location 0 for Generation 1, SCSI controller 0 location 0 for Generation 2). You can issue additional PowerShell commands to create and attach more virtual disks. We’ll tackle that after we finish with the New-VM cmdlet.

Create a VM with a VHDX, Single Cmdlet, and Specify Boot Order

We have one more item to control with New-VM: the boot device. Using the above cmdlets, your newly created VM will try to boot to the network first. If you used one of the variants that create a virtual hard disk, a failed network boot will fall through to the disk.

Let’s create a VM that boots to the virtual CD/DVD drive instead:
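For instance (note that it deliberately leaves out NewVHDPath; see the quirk described after the option list):

New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch' -BootDevice CD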

You have multiple options for the BootDevice parameter:

  • CD: attach a virtual drive and set it as the primary boot device
  • Floppy: set the virtual floppy drive as the primary boot device; Generation 1 only
  • IDE: set IDE controller 0, location 0 as the primary boot device; Generation 1 only
  • LegacyNetworkAdapter: attach a legacy network adapter and set it as the primary boot device; Generation 1 only
  • NetworkAdapter: set the network adapter as the primary boot device on a Generation 2 machine, attach a legacy network adapter and set it as the primary boot device on a Generation 1 machine
  • VHD: if you created a VHDX with New-VM, then this will set that disk as the primary boot device. Works for both Generation types

The BootDevice parameter does come with a quirk: if you create a VHD and set the VM to boot from CD using New-VM, it will fail to create the VM. It tries to attach both the new VHD and the new virtual CD/DVD drive to the same location. The entire process fails. You will need to create the VHD with the VM, then attach a virtual CD/DVD drive and modify the boot order, or vice versa.

Make Quick Changes to a New VM with PowerShell

You have your new VM, but you’d like to make some quick, basic changes. Set-VM includes all the common settings as well as a few rare options.

Adjust Processor and Memory Assignments

From New-VM, the virtual machine starts off with one virtual CPU and does not use static memory. My preferred new virtual machine, in two lines:
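Something along these lines (the Dynamic Memory minimum and maximum shown here are illustrative values):

New-VM -Name 'demovm' -Generation 2 -SwitchName 'vSwitch'
Set-VM -VMName 'demovm' -ProcessorCount 2 -DynamicMemory -MemoryStartupBytes 2GB -MemoryMinimumBytes 512MB -MemoryMaximumBytes 4GB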

Both New-VM and Set-VM include the MemoryStartupBytes parameter. I used it with Set-VM to make the grouping logical.

Some operating systems do not work with Dynamic Memory, some applications do not work with Dynamic Memory, and some vendors (and even some administrators) just aren’t ready for virtualization. In any of those cases, you can do something like this instead:
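A static-memory equivalent looks something like this:

New-VM -Name 'demovm' -Generation 2 -SwitchName 'vSwitch' -MemoryStartupBytes 2GB
Set-VM -VMName 'demovm' -ProcessorCount 2 -StaticMemory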

Technically, you can leave off the StaticMemory parameter in the preceding sequence. New-VM always creates a VM with static memory. Use it when you do not know the state of the VM.

Control Automatic Start and Stop Actions

When a Hyper-V host starts or shuts down, it needs to do something with its VMs. If it belongs to a cluster, it has an easy choice for highly-available VMs: move them. For non-HA VMs, it needs some direction. By default, new VMs will stay off when the host starts and save when the host shuts down. You can override these behaviors:
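For example (the delay and action values here are just one possible combination):

Set-VM -VMName 'demovm' -AutomaticStartAction StartIfRunning -AutomaticStartDelay 120 -AutomaticStopAction ShutDown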

You can use these parameters with any other parameter on Set-VM, and you do not need to include all three of them. If you use the Nothing setting for AutomaticStartAction or if you do not specify a value for AutomaticStartDelay, then it uses a value of 0. AutomaticStartDelay uses a time value of seconds.

AutomaticStartAction has these options (use [Tab] to cycle through):

  • Nothing: stay off
  • Start: always start with the host, after AutomaticStartDelay seconds
  • StartIfRunning: start the VM with the host after AutomaticStartDelay seconds, but only if it was running when the host shut down

Note: I am aware of what appears to be a bug in 2019 in which the VM might not start automatically.

AutomaticStopAction has these options (use [Tab] to cycle through):

  • Save: place the VM into a saved state
  • ShutDown: via the Hyper-V integration services/components, instruct the guest OS to shut down. If it does not respond or complete within the timeout period, force the VM off.
  • TurnOff: Hyper-V halts the virtual machine immediately (essentially like pulling the power on a physical system)

If you do not know what to do, take the safe route of Save. Hyper-V will wait for saves to complete.

Determine Checkpoint Behavior

By default, Windows 10 will take a checkpoint every time you turn on a virtual machine. That essentially gives you an Oops! button. Windows Server has that option, but leaves it off by default. Both Windows and Windows Server use the so-called “Production” checkpoint and fall back to “Standard” checkpoints. You can override all this behavior.

Applicable parameters:

  • CheckpointType: indicate which type of checkpoints to create. Use [Tab] to cycle through the possible values:
    • Disabled: the VM cannot have checkpoints. Does not impact backup’s use of checkpoints.
    • Production: uses VSS in the guest to signal VSS-aware applications to flush to disk, then takes a checkpoint of only the VM’s configuration and disks. Active threads and memory contents are not protected. If VSS in the guest does not respond, falls back to a “Standard” checkpoint.
    • ProductionOnly: same as Production, but fails the checkpoint operation instead of falling back to “Standard”
    • Standard: checkpoints the entire VM, including active threads and memory. Unlike a Production checkpoint, applications inside a VM have no way to know that a checkpoint operation took place.
  • SnapshotFileLocation: specifies the location for the configuration files of a virtual machine’s future checkpoints. Does not impact existing checkpoints. Does not affect virtual hard disk files (AVHD/X files are always created alongside the parent).
  • AutomaticCheckpointsEnabled: Controls whether or not Hyper-V makes a checkpoint at each VM start. $true to enable, $false to disable.

Example:
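A sketch that touches all three parameters (the snapshot path is illustrative):

Set-VM -VMName 'demovm' -CheckpointType Production -AutomaticCheckpointsEnabled $false -SnapshotFileLocation 'C:\LocalVMs\demovm'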

Honestly, I dislike the names “Production” and “Standard”. I outright object to the way that Hyper-V Manager and Failover Cluster Manager use the term “application-consistent” to describe them. You can read my article about the two types to help you decide what to do.

Control the Automatic Response to Disconnected Storage

In earlier versions of Hyper-V, losing connection to storage meant disaster for the VMs. Hyper-V would wait out the host’s timeout value (sometimes), then kill the VMs. Now, it can pause the virtual machine’s CPU, memory and I/O, then wait a while for storage to reconnect.

The value of AutomaticCriticalErrorActionTimeout is expressed in minutes. By default, Hyper-V will wait 30 minutes.

Alternatively, you can set AutomaticCriticalErrorAction to None and Hyper-V will kill the VM immediately, as it did in previous versions.

Attach Human-Readable Notes to a Virtual Machine

You can create notes for a virtual machine right on its properties.
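
For instance, with a placeholder VM name and note:

Set-VM -Name 'demovm' -Notes 'Accounting application server; check with the app owner before rebooting'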

Jeff Hicks gave this feature a full treatment and extended it.

Advanced VM Creation with PowerShell

To control all of the features of your new VM, you will need to use additional cmdlets. All of the cmdlets demonstrated in this section will follow a VM created with:
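
A simple base along these lines works; the name 'demovm' is a placeholder that the rest of the examples reuse:

New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB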

Starting from that base allows me to get where I want with the least amount of typing and effort.

Prepare the VM to Use Discrete Device Assignment

Hyper-V has some advanced capabilities to pass through host hardware using Discrete Device Assignment (DDA). Set-VM has three parameters that impact DDA:

  • LowMemoryMappedIoSpace
  • HighMemoryMappedIoSpace
  • GuestControlledCacheTypes

These have little purpose outside of DDA. Didier Van Hoye wrote a great article on DDA that includes practical uses for these parameters.
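
If you do have DDA-capable hardware, the parameters look something like this (the sizes are illustrative placeholders, not recommendations):

Set-VM -Name 'demovm' -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB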

Specify Processor Settings on a New VM

All of the ways to create a VM result in a single vCPU with default settings. You can make some changes in the GUI, but only PowerShell reaches everything. Use the Set-VMProcessor cmdlet.

I always use at least 2 vCPU because it allows me to leverage SMT and Windows versions past XP/2003 just seem to respond better. I do not use more than two without a demonstrated need or when I have an under-subscribed host. We have an article that dives much deeper into virtual CPUs on Hyper-V.

Give our new VM a second vCPU:
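
With the placeholder VM:

Set-VMProcessor -VMName 'demovm' -Count 2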

You cannot change the virtual processor count on a running, saved, or paused VM.

Note: You can also change the vCPU count with Set-VM, shown earlier.

Set Hard Boundaries on a VM’s Access to CPU Resources

To set hard boundaries on the minimum and maximum percentage of host CPU resources the virtual machine can access, use the Reserve and Maximum parameters, respectively. These values express a percentage of the VM’s assigned vCPUs, not of the whole host, so the actual share of host processor resources depends on the number of vCPUs assigned. Calculate actual resource reservations/limits like this:

Parameter Value / Number of Host Logical Processors * Number of Assigned Virtual CPUs = Actual Value

So, a VM with 4 vCPUs set with a Reserve value of 25 on a host with 24 logical processors will lock about 4% of the host’s total CPU resources for itself. A VM with 6 vCPUs and a Maximum of 75 on a host with 16 logical processors will use no more than about 28% of total processing power. Refer to the previously-linked article for an explanation of these settings.

To set all of these values:
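
For example:

Set-VMProcessor -VMName 'demovm' -Count 2 -Reserve 25 -Maximum 75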

You do not need to specify any of these values. New-VM and all the GUI tools create a VM with a value of 1 for Count, a value of 0 for Reserve and a value of 100 for Maximum. If you do not specify one of these parameters for Set-VMProcessor, it leaves the value alone. So, you can set the processor Count in one iteration, then modify the Reserve at a later time without disturbing the Count, and then the Maximum at some other point without impacting either of the previous settings.

You can change these settings on a VM while it is off, on, or paused, but not while saved.

Prioritize a VM’s Access to CPU Resources

Instead of hard limits, you can prioritize a VM’s CPU access with Set-VMProcessor’s RelativeWeight parameter. As the name indicates, the setting is relative. Every VM has this setting, so if every VM has the same priority value, then no one has priority. VMs begin life with a default processor weight of 100. The host’s scheduler gives preference to VMs with a higher processor weight.

To set the VM’s vCPU count and relative processor weight:
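
For instance:

Set-VMProcessor -VMName 'demovm' -Count 2 -RelativeWeight 150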

You do not need to specify both values together; I only included the Count to show you how to modify both at once on a new VM. You can also include the Reserve and Maximum settings if you like.

Enable Auto-Throttle on a VM’s CPU Access

Tinkering with limits and reservations and weights can consume a great deal of administrative effort, especially when you only want to ensure that no VM runs off with your CPU and drags the whole thing down for everyone. Your first, best control on that is the number of vCPU assigned to a VM. But, when you start to work with high densities, that approach does not solve much. So, Microsoft designed Host Resource Protection. This feature does not look at raw CPU utilization so much as it monitors certain activities. If it deems them excessive, it enacts throttling. You get this feature with a single switch:
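
With the placeholder VM name:

Set-VMProcessor -VMName 'demovm' -EnableHostResourceProtection $true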

Microsoft does not fully document what this controls. You will need to test it in your environment to determine its benefits.

You can use the EnableHostResourceProtection parameter by itself or with any of the others.

Set VM Processor Compatibility

Hyper-V uses a CPU model that very nearly exposes the logical processor to VMs as-is. That means that a VM can access all of the advanced instruction sets implemented by a processor. However, Microsoft also designed VMs to move between hosts. Not all CPUs use the same instruction set. So, Microsoft implements a setting that hides all instruction sets except those shared by every supported CPU from a manufacturer. If you plan to Live Migrate a VM between hosts with different CPUs from the same manufacturer, use this cmdlet:
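
For example:

Set-VMProcessor -VMName 'demovm' -Count 2 -CompatibilityForMigrationEnabled $true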

Employ a related parameter if you need to run unsupported versions of Windows (like NT 4.0 or 2000):
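
Such as:

Set-VMProcessor -VMName 'demovm' -CompatibilityForOlderOperatingSystemsEnabled $true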

This one time, I did not override Count. Older operating systems did not have the best support for multi-processing, and a lot of applications from that era perform worse with multiple processors.

You can specify $false to disable these features. You can only change them while the VM is turned off. As with the preceding demonstrations, you can use these parameters in any combination with the others, or by themselves.

Change a VM’s NUMA Processor Settings

I have not written much about NUMA. Even the poorest NUMA configuration would not hurt more than a few Hyper-V administrators. If you don’t know what NUMA is, don’t worry about it. I am writing these instructions for people that know what NUMA is, need it, and just want to know how to use PowerShell to configure it for a Hyper-V VM.

Set-VMProcessor provides two of the three available NUMA settings. We will revisit the other one in the Set-VMMemory section below. Use Set-VMProcessor to specify the maximum number of virtual CPUs per NUMA node or the maximum number of virtual NUMA nodes this VM sees per socket.
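
An illustrative combination (the counts are placeholders):

Set-VMProcessor -VMName 'demovm' -MaximumCountPerNumaNode 2 -MaximumCountPerNumaSocket 1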

As before, you can use any combination of these parameters with each other and the previously-shown parameters. Unlike before, mistakes here can make things worse without making anything better.

Enable Hyper-V Nested Virtualization

Want to run Hyper-V on Hyper-V? No problem (anymore). Run this after you make your new VM:
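
With the placeholder VM:

Set-VMProcessor -VMName 'demovm' -Count 2 -ExposeVirtualizationExtensions $true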

Note: Enabling virtualization extensions silently disables Dynamic Memory. Only Startup memory will apply.

I have not tested this setting with other hypervisors. It does pass the enabled virtualization features of your CPU down to the guest, so it might enable others. I also did not test this parameter with any parameter other than Count.

Change Memory Settings on a New VM

New-VM always leaves you with static memory. If you don’t provide a MemoryStartupBytes value, it will use a default of one gigabyte. The GUI wizards can enable Dynamic Memory, but will only set the Startup value. For all other memory settings, you must access the VM’s property sheets or turn to PowerShell. We will make these changes with Set-VMMemory.

Note: You can also change several memory values with Set-VM, shown earlier.

Setting Memory Quantities on a VM

Three parameters control a virtual machine’s memory quantities:

  • StartupBytes: How much memory the virtual machine will have at boot time. If the VM does not utilize Dynamic Memory, this value persists throughout the VM’s runtime
  • MinimumBytes: The minimum amount of memory that Dynamic Memory can assign to the virtual machine
  • MaximumBytes: The maximum amount of memory that Dynamic Memory can assign to the virtual machine

These values exist on all VMs. Hyper-V only references the latter two if you configure the VM to use Dynamic Memory.
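
To set the startup quantity (placeholder name again):

Set-VMMemory -VMName 'demovm' -StartupBytes 2GB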

This cmdlet sets the VM to use two gigabytes of memory at the start. It does not impact Dynamic Memory in any way; it leaves all of those settings alone. You can change this value at any time, although some guest operating systems will not reflect the change.

We will include the other two settings in the upcoming Dynamic Memory sections.

Enable Dynamic Memory on a VM

Control whether or not a VM uses Dynamic Memory with the DynamicMemoryEnabled parameter.
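
For example:

Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $true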

You can disable it with $false. The above usage does not modify any of the memory quantities. A new VM defaults to 512MB minimum and 1TB maximum.

You can only make this change while the VM is off.

You can also control the Buffer percentage that Dynamic Memory uses for this VM. The “buffer” refers to a behind-the-scenes memory reservation for memory expansion. Hyper-V sets aside a percentage of the amount of memory currently assigned to the VM for possible expansion. You control that percentage with this parameter.
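
An illustrative value of 10 percent:

Set-VMMemory -VMName 'demovm' -Buffer 10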

So, if Hyper-V assigns 2108 megabytes to this VM, it will also have up to 210.8 megabytes of buffered memory. Buffer only sets a maximum; Hyper-V will use less in demanding conditions or if the set size would exceed the maximum assigned value. Hyper-V ignores the Buffer setting when you disable Dynamic Memory on a VM. You can change the buffer size on a running VM.

Dynamic Memory Setting Demonstrations

Let’s combine the above settings into a few demonstrations.
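
A couple of illustrative combinations (all sizes are placeholders):

Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 2GB -MaximumBytes 8GB -Buffer 10
Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $false -StartupBytes 4GB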

Control a VM’s Memory Allocation Priority

If VMs have more total assigned memory than the Hyper-V host can accommodate, it will boot them by Priority order (higher first). Also, if Dynamic Memory has to choose between VMs, it will work from Priority.
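
An illustrative setting:

Set-VMMemory -VMName 'demovm' -Priority 80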

Valid values range from 0 to 100. New VMs default to 50. You can use Priority with any other valid combination of Set-VMMemory. You can change Priority at any time.

Note: The GUI tools call this property Memory weight and show its value as a 9-point slider from Low (0) to High (100).

Change a VM’s NUMA Memory Settings

We covered the processor-related NUMA settings above. Use Set-VMMemory to control the amount of memory in the virtual NUMA nodes presented to this VM:
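
For instance (the size is a placeholder):

Set-VMMemory -VMName 'demovm' -MaximumAmountPerNumaNodeBytes 16GB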

As with the processor NUMA settings, I only included this to show you how. If you do not understand NUMA and know exactly why you would make this change, do not touch it.

Attach Virtual Disks and CD/DVD Drives to a Virtual Machine

You could use these cmdlets instead of the features of New-VM to attach drives. You can also use them to augment New-VM. Due to some complexities, I prefer the latter.

A Note on Virtual Machine Drive Controllers

On a physical computer, you have to use the physical drive controllers as you find them. If you run out of disk locations, you have to add physical controllers. With Hyper-V, you do not directly manage the controllers. Simply instruct the related cmdlets to attach the drive to a specific controller number and location. As long as the VM does not already have a drive in that location, it will use it.

On a Generation 1 virtual machine, you have two emulated Enhanced Integrated Drive Electronics (EIDE, or just IDE) controllers, numbered 0 and 1. Each has location 0 and location 1 available. That allows a total of four available IDE slots. When you set a Generation 1 VM to boot to IDE or VHD, it will always start with IDE controller 0, position 0. If you set it to boot to CD, it will walk down through 0:0, 0:1, 1:0, and 1:1 to find the first CD drive.

Both Generation types allow up to four synthetic SCSI controllers, numbered 0 through 3. Each controller can have up to 64 locations, numbered 0 through 63.

Unlike a physical system, you will not gain benefits from balancing drives across controllers.

Create a Virtual Hard Disk File to Attach

You can’t attach a disk file that you don’t have. You must know what you will call it, where you want to put it, and how big to make it.
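
A typical creation, with placeholder path and size:

New-VHD -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB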

By default, New-VHD creates a dynamically-expanding hard disk. For the handful of cases where fixed makes more sense, override with the Fixed parameter:
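
For example:

New-VHD -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -Fixed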

By default a dynamically-expanding VHDX uses a 32 megabyte block size. For some file systems, like ext4, that can cause major expansion percentages over very tiny amounts of utilized space. Override the block size to a value as low as 1 megabyte:
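
For instance:

New-VHD -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -BlockSizeBytes 1MB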

You can also use LogicalSectorSizeBytes and PhysicalSectorSizeBytes to override defaults. Hyper-V will detect the underlying physical storage characteristics and choose accordingly, so do not override these values unless you intend to migrate the disk to a system with different values:
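
Illustrative values (512-byte logical, 4-kilobyte physical sectors):

New-VHD -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -LogicalSectorSizeBytes 512 -PhysicalSectorSizeBytes 4096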

Create a Virtual Hard Disk from a Physical Disk

You can instruct Hyper-V to encapsulate the entirety of a physical disk inside a VHDX. First, use Get-Disk to find the disk number. Then use New-VHD to transfer its contents into a VHD:
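
Assuming the source shows up as disk number 2 (a placeholder value):

Get-Disk
New-VHD -Path 'C:\VMs\Virtual Hard Disks\capture.vhdx' -SourceDisk 2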

You can combine this usage with Fixed or BlockSizeBytes (not both). The new VHDX will have a maximum size that matches the source disk.

Create a Child Virtual Hard Disk

In some cases, you might wish to use a differencing disk with a VM, perhaps as its primary disk. This usage allows the VM to operate normally, but prevents it from making changes to the base VHDX file.
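
A sketch with placeholder paths:

New-VHD -Path 'C:\VMs\Virtual Hard Disks\demovm-child.vhdx' -ParentPath 'C:\VMs\Virtual Hard Disks\base.vhdx'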

You can also specify the Differencing parameter, if you like.

Note: Any change to the base virtual hard disk invalidates all of its children.

Check a VM for Available Virtual Hard Disk and CD/DVD Locations

You do not need to decide in advance where to connect a disk. However, you sometimes want to have precise control. Before using any of the attach cmdlets, consider verifying that it has not already filled the intended location. Get-VMHardDiskDrive and Get-VMDvdDrive will show current attachments.
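
Quick checks with the placeholder VM:

Get-VMHardDiskDrive -VMName 'demovm'
Get-VMDvdDrive -VMName 'demovm'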

Attach a Virtual Hard Disk File to a Virtual Machine

You can add a disk very easily with Add-VMHardDiskDrive:
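
For example:

Add-VMHardDiskDrive -VMName 'demovm' -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx'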

Hyper-V will attach it to the next available location.

You can override to a particular location:
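
For instance, SCSI controller 0, location 12 (arbitrary placeholder choices):

Add-VMHardDiskDrive -VMName 'demovm' -Path 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 12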

Technically, you can skip the ControllerType parameter; Generation 1 assumes IDE and Generation 2 has no other option.

If you want to attach a disk to another SCSI controller, but it does not have another, then add it first:
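
Something like:

Add-VMScsiController -VMName 'demovm'
Add-VMHardDiskDrive -VMName 'demovm' -ControllerType SCSI -ControllerNumber 1 -Path 'C:\VMs\Virtual Hard Disks\demovm-data.vhdx'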

Notice that I did not specify a location on Add-VMHardDiskDrive. If you specify a controller but no location, it just uses the next available.

Attach a Virtual DVD Drive to a Virtual Machine

Take special note: this cmdlet applies to a virtual drive, not a virtual disk. Basically, it creates a place to put a CD/DVD image, but does not necessarily involve a disk image. It can do both, as you’ll see.

Add-VMDvdDrive uses all the same parameters as Add-VMHardDiskDrive above. If you do not specify the Path parameter, then the drive remains empty. If you do specify Path, it mounts the image immediately:
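
For example, with a placeholder ISO path:

Add-VMDvdDrive -VMName 'demovm' -Path 'C:\ISOs\WindowsServer2019.iso'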

All the notes from the beginning about permissions and delegation apply here.

If you have a DVD drive already and just want to change its contents, use Set-VMDvdDrive:
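
Such as:

Set-VMDvdDrive -VMName 'demovm' -Path 'C:\ISOs\WindowsServer2019.iso'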

If you have more than one CD/DVD attached, you can use the ControllerType, ControllerNumber, and ControllerLocation parameters to specify.

If you want to empty the drive:
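
Set the Path to null:

Set-VMDvdDrive -VMName 'demovm' -Path $null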

Remove-VMDvdDrive completely removes the drive from the system.

Work with a New Virtual Machine’s Network Adapters

Every usage of New-VM should result in a virtual machine with at least one virtual network adapter. By default, it will not attach it to a virtual switch. You might need to modify a VLAN. If desired, you can change the name of the adapter. You can also add more adapters, if you want.

Attach the Virtual Adapter to a Virtual Switch

You can connect every adapter on a VM to the same switch:
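
With a placeholder switch name:

Connect-VMNetworkAdapter -VMName 'demovm' -SwitchName 'vSwitch'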

If you want to specify the adapter, you have to work harder. I wrote up a more thorough guide on networking that includes that, and other advanced topics.

Connect the Virtual Adapter to a VLAN

All of the default vNIC creation processes leave the adapter as untagged. To specify a VLAN, use Set-VMNetworkAdapterVLAN:
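
An access-mode example on a placeholder VLAN:

Set-VMNetworkAdapterVlan -VMName 'demovm' -Access -VlanId 42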

If you need help selecting a vNIC for this operation, use my complete guide for details. It does not have a great deal of information on other ways to use this cmdlet, such as for trunks, so refer to the official documentation.

Rename the Virtual Adapter

You could differentiate adapters for the previous cmdlets by giving adapters unique names. Otherwise, Hyper-V calls them all “Network Adapter”.
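
For example:

Rename-VMNetworkAdapter -VMName 'demovm' -NewName 'Adapter 1'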

Like the preceding cmdlets, this usage will rename all vNICs on the VM. Use caution. But, if you do this on a system with only one adapter, then add another, you can filter against adapters not named “Adapter 1”, then later use the VMNetworkAdapterName parameter.

Add Another Virtual Adapter

You can use Add-VMNetworkAdapter to add further adapters to the VM:
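
In its simplest form:

Add-VMNetworkAdapter -VMName 'demovm'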

Even better, you can name it right away:
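
For instance:

Add-VMNetworkAdapter -VMName 'demovm' -Name 'Adapter 2'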

Don’t forget to connect your new adapter to a virtual switch (you can still use Connect-VMNetworkAdapter, of course):
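
For example:

Connect-VMNetworkAdapter -VMName 'demovm' -Name 'Adapter 2' -SwitchName 'vSwitch'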

Add-VMNetworkAdapter has several additional parameters for adapter creation. Set-VMNetworkAdapter has a superset, so I will show them in its context. However, you might find it convenient to use StaticMacAddress when creating the adapter.

Set the MAC Address of a Virtual Adapter

You can set the MAC address to whatever you like:
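
With a placeholder address:

Set-VMNetworkAdapter -VMName 'demovm' -StaticMacAddress '00155D010203'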

If you need to override the MAC for spoofing (as in, for a software load-balancer):
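
For example:

Set-VMNetworkAdapter -VMName 'demovm' -MacAddressSpoofing On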

Other Virtual Network Adapter Settings

Virtual network adapters have a dizzying array of options. Check the official documentation or Get-Help Set-VMNetworkAdapter to learn about them.

Work with a New Virtual Machine’s Integration Services Settings

None of the VM creation techniques allow you to make changes to the Hyper-V integration services. Few VMs ever need such a change, so including them would amount to a nuisance for most of us. We do sometimes need to change these items, perhaps to disable time synchronization for virtualized domain controllers or to block attempts to signal VSS in Linux guests.

We do not use “Set” cmdlets to control integration services. We have Get-, Enable-, and Disable- for integration services. Every new VM enables all services except “Guest Services”. Ideally, the cmdlets would all have pre-set options for the integration services. Unfortunately, we have to either type them out or pipe them in from Get-VMIntegrationService. You can use it to get a list of the available services. You can then use the selection capabilities of the console to copy and paste the item that you need (draw over the letters to copy, then right-click to paste). You can also use a filter (Where-Object) to pick the one that you need. For now, we will see the simplest choices.

To disable the time synchronization service for a virtual machine:
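
On an English-language system, the service carries this display name:

Disable-VMIntegrationService -VMName 'demovm' -Name 'Time Synchronization'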

To enable guest services for a virtual machine:
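
Again, using the display name as it appears on an English-language system:

Enable-VMIntegrationService -VMName 'demovm' -Name 'Guest Service Interface'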

Most of the integration service names contain spaces. Don’t forget to use single or double quotes around such a name.

Put it All Together: Use PowerShell to Make the Perfect VM

A very common concern: “How can I remember all of this?” Well, you can’t remember it all. I have used PowerShell to control VMs since the unofficial module for 2008. I don’t remember everything. But, you don’t need to try. In the general sense, you always have Get-Help and Get-Command -Module Hyper-V. But, even better, you probably won’t use the full range of capability. Most of us create VMs with a narrow range of variance. I will give you two general tips for making the custom VM process easier.

Use a Text Tool to Save Creation Components

In introductory, training, and tutorial materials, we often make a strong distinction between interactive PowerShell and scripted PowerShell. If you remember what you want, you can type it right in. If you make enough VMs to justify it, you can have a more thorough script that you guide by parameter. But, you can combine the two for a nice middle ground.

First, pick a tool that you like. Visual Studio Code has a lot of features to support PowerShell. Notepad++ provides a fast and convenient scratch location to copy and paste script pieces.

This tip has one central piece: as you come up with cmdlet configurations that you will, or even might, use again, save them. You don’t have to build everything into a full script. Sometimes, you need a toolbox with a handful of single-purpose snippets. More than once in my career, I’ve come up with a clever solution to solve a problem at hand. Later, I tried to recall it from memory, and couldn’t. Save those little things — you never know when you’ll need them.

Use PowerShell’s Pipeline and Variables with Your Components

In all the cmdlets that I showed you above, I spelled out the virtual machine’s name. You could do a lot of text replacement each time you wanted to use them. But, you have a better way. If you’ve run New-VM lately, you probably noticed that it emitted something to the screen:

[Screenshot: the virtual machine object that New-VM emits to the console]

Instead of just letting all that go to the screen, you can capture it or pass it along to another cmdlet.

Pipeline Demo

Use the pipe character | to quickly ship output from one cmdlet to another. It works best for making relatively few and simple changes.
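
A sketch of such a chain, reusing the placeholder names from earlier:

New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath 'C:\VMs\Virtual Hard Disks\demovm.vhdx' -NewVHDSizeBytes 60GB |
    Set-VM -ProcessorCount 2 -DynamicMemory -MemoryMaximumBytes 8GB -Passthru |
    Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName 'vSwitch'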

The above has three separate commands that all chain from the first. You can copy this into your text manipulation tool and save it. You can then use it as a base pattern. You change the name of the VM and its VHDX in the text tool and then you can create a VM with these settings anytime you like. No need to step through a wizard and then flip a lot of switches after.

Warning: Some people in the PowerShell community develop what I consider an unhealthy obsession with pipelining, or “one liners”. You should know how the pipeline works, especially the movement of objects. But, extensive pipelining becomes impractical quite quickly. Somewhere along the way, it amounts to little more than showing off. Worse, because not every cmdlet outputs the same object, you quickly have to learn a lot of tricks that do nothing except keep the pipeline going. Most egregiously, “one liners” impose severe challenges to legibility and maintainability with no balancing benefit. Use the pipeline to the extent that it makes things easier, but no further.

Variables Demos

You can capture the output of any cmdlet into a variable, then use that output in succeeding lines. It requires more typing than the pipeline, but gives you more flexibility.
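
One way to lay it out (names, paths, and sizes are placeholders):

$VM = New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB
$VHDXPath = 'C:\VMs\Virtual Hard Disks\{0}.vhdx' -f $VM.Name
New-VHD -Path $VHDXPath -SizeBytes 60GB
Add-VMHardDiskDrive -VM $VM -Path $VHDXPath
Set-VMProcessor -VM $VM -Count 2
Set-VMMemory -VM $VM -DynamicMemoryEnabled $true -MinimumBytes 512MB -MaximumBytes 8GB
Connect-VMNetworkAdapter -VMName $VM.Name -SwitchName 'vSwitch'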

Each of the cmdlets in the above listing has a PassThru parameter, but, except for New-VM, none emits an object that any of the others can use. This script takes much more typing than the pipeline demo, but it does more and breaks each activity out into a single, easily comprehensible line. As with the pipeline version, you can set up each line to follow the pattern that you use most, then change only the name in the first line to suit each new VM. Notice that it automatically gives the VHDX a name that matches the VM, something that we couldn’t do in the pipeline version.

Combining Pipelines and Variables

You can use variables and pipelines together to maximize their capabilities.
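
One way to combine them; only the first line needs to change for each new VM:

$VMName = 'demovm'
New-VM -Name $VMName -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath ('C:\VMs\Virtual Hard Disks\{0}.vhdx' -f $VMName) -NewVHDSizeBytes 60GB |
    Set-VM -ProcessorCount 2 -DynamicMemory -MemoryMaximumBytes 8GB -Passthru |
    Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName 'vSwitch'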

With this one, you can implement your unique pattern but place all the changeable items right in the beginning. This sample only sets the VM’s name. If you want to make other pieces easily changeable, just break them out onto separate lines.

Making Your Own Processes

If you will make a single configuration of VM repeatedly, you should create a saved script or an advanced function in your profile. It should have at least one parameter to specify the individual VM name.

But, even though most people won’t create VMs with a particularly wide variance of settings, neither will many people create VMs with an overly tight build. Using a script with lots of parameters presents its own challenges. So, instead of a straight-through script, make a collection of copy/pasteable components.

Use something like the following:
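
A starting point along these lines (placeholder names throughout; copy, adjust, or delete chunks to suit):

# Base VM
$VMName = 'demovm'
$VM = New-VM -Name $VMName -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath ('C:\VMs\Virtual Hard Disks\{0}.vhdx' -f $VMName) -NewVHDSizeBytes 60GB

# Processor
Set-VMProcessor -VM $VM -Count 2 -RelativeWeight 150
Set-VMProcessor -VM $VM -ExposeVirtualizationExtensions $true   # nested virtualization; disables Dynamic Memory

# Memory
Set-VMMemory -VM $VM -DynamicMemoryEnabled $true -MinimumBytes 512MB -MaximumBytes 8GB -Buffer 10

# Networking
Get-VMNetworkAdapter -VM $VM | Connect-VMNetworkAdapter -SwitchName 'vSwitch'
Set-VMNetworkAdapterVlan -VMName $VMName -Access -VlanId 42

# Integration services
Disable-VMIntegrationService -VMName $VMName -Name 'Time Synchronization'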

Trying to run all of that as-is would cause some problems. Instead, copy out the chunks that you need and paste them as necessary. Add in whatever parts suit your needs.

Be sure to let us know how you super-charged your VM creation routines!


Go to Original Article
Author: Eric Siron

Hyper-V Powering Windows Features

December 2019

Hyper-V is Microsoft’s hardware virtualization technology that initially released with Windows Server 2008 to support server virtualization and has since become a core component of many Microsoft products and features. These features range from enhancing security to empowering developers to enabling the most compatible gaming console. Recent additions to this list include Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated-Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2. Additionally, applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop, are also being introduced and improved.

As the scope of Windows virtualization has expanded to become an integral part of the operating system, many new OS capabilities have taken a dependency on Hyper-V. Consequently, this created compatibility issues with many popular third-party products that provide their own virtualization solutions, forcing users to choose between applications or losing OS functionality. Therefore, Microsoft has partnered extensively with key software vendors such as VMware, VirtualBox, and BlueStacks to provide updated solutions that directly leverage Microsoft virtualization technologies, eliminating the need for customers to make this trade-off.

Windows Sandbox is an isolated, temporary, desktop environment where you can run untrusted software without the fear of lasting impact to your PC.  Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, the entire state, including files, registry changes and the installed software, are permanently deleted. Windows Sandbox is built using the same technology we developed to securely operate multi-tenant Azure services like Azure Functions and provides integration with Windows 10 and support for UI based applications.

Windows Defender Application Guard (WDAG) is a Windows 10 security feature introduced in the Fall Creators Update (Version 1709 aka RS3) that protects against targeted threats using Microsoft’s Hyper-V virtualization technology. WDAG augments Windows virtualization-based security capabilities to prevent zero-day kernel vulnerabilities from compromising the host operating system. WDAG also gives enterprise users of Microsoft Edge and Internet Explorer (IE) protection from zero-day kernel vulnerabilities by isolating a user’s untrusted browser sessions from the host operating system. Security-conscious enterprises use WDAG to lock down their enterprise host while allowing their users to browse non-enterprise content.

Application Guard isolates untrusted sites using a new instance of Windows at the hardware layer.

In order to protect critical resources such as the Windows authentication stack, single sign-on tokens, the Windows Hello biometric stack, and the Virtual Trusted Platform Module, a system’s firmware and hardware must be trustworthy. Windows Defender System Guard reorganizes the existing Windows 10 system integrity features under one roof and sets up the next set of investments in Windows security. It’s designed to make these security guarantees:

  • To protect and maintain the integrity of the system as it starts up
  • To validate that system integrity has truly been maintained through local and remote attestation

Detecting and stopping attacks that tamper with kernel-mode agents at the hypervisor level is a critical component of the unified endpoint protection platform in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP). It’s not without challenges, but the deep integration of Windows Defender Antivirus with hardware-based isolation capabilities allows the detection of artifacts of such attacks.

Hyper-V plays an important role in the container development experience on Windows 10. Since Windows containers require a tight coupling between its OS version and the host that it runs on, Hyper-V is used to encapsulate containers on Windows 10 in a transparent, lightweight virtual machine. Colloquially, we call these “Hyper-V Isolated Containers”. These containers are run in VMs that have been specifically optimized for speed and efficiency when it comes to host resource usage. Hyper-V Isolated Containers most notably allow developers to develop for multiple Linux distros and Windows at the same time and are managed just like any container developer would expect as they integrate with all the same tooling (e.g. Docker).

The Windows Hypervisor Platform (WHP) adds an extended user-mode API for third-party virtualization stacks and applications to create and manage partitions at the hypervisor level, configure memory mappings for the partition, and create and control execution of virtual processors. The primary value here is that third-party virtualization software (such as VMware) can co-exist with Hyper-V and other Hyper-V based features. Virtualization-Based Security (VBS) is a recent technology that has enabled this co-existence.

WHP provides an API similar to that of Linux’s KVM and macOS’s Hypervisor Framework, and is currently leveraged on projects by QEMU and VMware.

This diagram provides a high-level overview of a third-party architecture.

WSL 2 is the newest version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its feature updates include increased file system performance as well as added full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer’s hardware, but still provides the same user experience as in WSL 1 (the current widely available version). The main difference is that WSL 2 uses a new architecture that runs a true Linux kernel inside a virtual machine. Individual Linux distros can be run either as a WSL 1 distro or as a WSL 2 distro, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side.

Kubernetes started officially supporting Windows Server in production with the release of Kubernetes version 1.14 (in March 2019). Windows-based applications constitute a large portion of the workloads in many organizations. Windows containers provide a modern way for these Windows applications to use DevOps processes and cloud native patterns. Kubernetes has become the de facto standard for container orchestration; hence this support enables a vast ecosystem of Windows applications to not only leverage the power of Kubernetes, but also to leverage the robust and growing ecosystem surrounding it. Organizations with investments in both Windows-based applications and Linux-based applications no longer need to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments. The engineering that supported this release relied upon open source and community led approaches that originally brought Windows Server containers to Windows Server 2016.

These components and tools have allowed Microsoft’s Hyper-V technology to introduce new ways of enabling customer experiences. Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated-Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2 are all new Hyper-V components that ensure the security and flexibility customers should expect from Windows. The coordination of applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop also represent Microsoft’s dedication to customer needs, which will continue to stand for our main sentiment going forward.

Go to Original Article
Author: nickeaton

Virtualization-Based Security: Enabled by Default

Virtualization-based Security (VBS) uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. Windows can use this “virtual secure mode” (VSM) to host a number of security solutions, providing them with greatly increased protection from vulnerabilities in the operating system, and preventing the use of malicious exploits which attempt to defeat operating systems protections.

The Microsoft hypervisor creates VSM and enforces restrictions which protect vital operating system resources, provides an isolated execution environment for privileged software and can protect secrets such as authenticated user credentials. With the increased protections offered by VBS, even if malware compromises the operating system kernel, the possible exploits can be greatly limited and contained because the hypervisor can prevent the malware from executing code or accessing secrets.

The Microsoft hypervisor has supported VSM since the earliest versions of Windows 10. However, until recently, Virtualization-based Security has been an optional feature that is most commonly enabled by enterprises. This was great, but the hypervisor development team was not satisfied. We believed that all devices running Windows should have Microsoft’s most advanced and most effective security features enabled by default. In addition to bringing significant security benefits to Windows, achieving default enablement status for the Microsoft hypervisor enables seamless integration of numerous other scenarios leveraging virtualization. Examples include WSL2, Windows Defender Application Guard, Windows Sandbox, Windows Hypervisor Platform support for 3rd party virtualization software, and much more.

With that goal in mind, we have been hard at work over the past several Windows releases optimizing every aspect of VSM. We knew that getting to the point where VBS could be enabled by default would require reducing the performance and power impact of running the Microsoft hypervisor on typical consumer-grade hardware like tablets, laptops and desktop PCs. We had to make the incremental cost of running the hypervisor as close to zero as possible and this was going to require close partnership with the Windows kernel team and our closest silicon partners – Intel, AMD, and ARM (Qualcomm).

Through software innovations like HyperClear and by making significant hypervisor and Windows kernel changes to avoid fragmenting large pages in the second-level address translation table, we were able to dramatically reduce the runtime performance and power impact of hypervisor memory management. We also heavily optimized hot hypervisor codepaths responsible for things like interrupt virtualization – taking advantage of hardware virtualization assists where we found that it was helpful to do so. Last but not least, we further reduced the performance and power impact of a key VSM feature called Hypervisor-Enforced Code Integrity (HVCI) by working with silicon partners to design completely new hardware features including Intel’s Mode-based execute control for EPT (MBEC), AMD’s Guest-mode execute trap for NPT (GMET), and ARM’s Translation table stage 2 Unprivileged Execute-never (TTS2UXN).

I’m proud to say that as of Windows 10 version 1903 9D, we have succeeded in enabling Virtualization-based Security by default on some capable hardware!

The Samsung Galaxy Book2 is officially the first Windows PC to have VBS enabled by default. This PC is built around the Qualcomm Snapdragon 850 processor, a 64-bit ARM processor. This is particularly exciting for the Microsoft hypervisor development team because it also marks the first time that enabling our hypervisor is officially supported on any ARM-based device.

Keep an eye on this blog for announcements regarding the default-enablement of VBS on additional hardware and in future versions of Windows 10.

Go to Original Article
Author: brucesherwin

VMware Workstation and Hyper-V – Working Together

08-27-2019 04:51 PM

Yesterday VMware demonstrated a pre-release version of VMware Workstation with early support for the Windows Hypervisor Platform in the What’s New in VMware Fusion and VMware Workstation session at VMworld.

In Windows 10 we have introduced many security features that utilize the Windows Hypervisor.  Credential Guard, Windows Defender Application Guard, and Virtualization Based Security all utilize the Windows Hypervisor.  At the same time, new Developer features like Windows Server Containers and the WSL 2 both utilize the Windows Hypervisor.

This has made it challenging for our customers who need to use VMware Workstation. Historically, it has not been possible to run VMware Workstation when Hyper-V was enabled.

In the future, users will be able to run all of these applications together. This means that users of VMware Workstation will be able to take advantage of all the security enhancements and developer features that are available in Windows 10. Microsoft and VMware have been collaborating on this effort, and I am really excited to be a part of this moment!

Cheers,
Ben

Go to Original Article
Author: Ben Armstrong

5/14: Hyper-V HyperClear Update

05-14-2019 12:54 PM

Four new speculative execution side channel vulnerabilities were announced today and affect a wide array of Intel processors. The list of affected processors includes Intel Xeon, Intel Core, and Intel Atom models. These vulnerabilities are referred to as CVE-2018-12126 Microarchitectural Store Buffer Data Sampling (MSBDS), CVE-2018-12130 Microarchitectural Fill Buffer Data Sampling (MFBDS), CVE-2018-12127 Microarchitectural Load Port Data Sampling (MLPDS), and CVE-2019-11091 Microarchitectural Data Sampling Uncacheable Memory (MDSUM). These vulnerabilities are like other Intel CPU vulnerabilities disclosed recently in that they can be leveraged for attacks across isolation boundaries. This includes intra-OS attacks as well as inter-VM attacks.

In a previous blog post, the Hyper-V hypervisor engineering team described our high-performing and comprehensive side channel vulnerability mitigation architecture, HyperClear. We originally designed HyperClear as a defense against the L1 Terminal Fault (a.k.a. Foreshadow) Intel side channel vulnerability. Fortunately for us and for our customers, HyperClear has proven to be an excellent foundation for mitigating this new set of side channel vulnerabilities. In fact, HyperClear required a relatively small set of updates to provide strong inter-VM and intra-OS protections for our customers. These updates have been deployed to Azure and are available in Windows Server 2016 and later supported releases of Windows and Windows Server. Just as before, the HyperClear mitigation allows for safe use of hyper-threading in a multi-tenant virtual machine hosting environment.

We have already shared the technical details of HyperClear and the set of required changes to mitigate this new set of hardware vulnerabilities with industry partners. However, we know that many of our customers are also interested to know how we’ve extended the Hyper-V HyperClear architecture to provide protections against these vulnerabilities.

As we described in the original HyperClear blog post, HyperClear relies on 3 main components to ensure strong inter-VM isolation:

  1. Core Scheduler
  2. Virtual-Processor Address Space Isolation
  3. Sensitive Data Scrubbing

As we extended HyperClear to mitigate these new vulnerabilities, the fundamental components of the architecture remained constant. However, there were two primary hypervisor changes required:

  1. Support for a new Intel processor feature called MbClear. Intel has been working to add support for MbClear by updating the CPU microcode for affected Intel hardware. The Hyper-V hypervisor uses this new feature to clear microarchitectural buffers when switching between virtual processors that belong to different virtual machines. This ensures that when a new virtual processor begins to execute, there is no data remaining in any microarchitectural buffers that belongs to a previously running virtual processor. Additionally, this new processor feature may be exposed to guest operating systems to implement intra-OS mitigations.
  2. Always-enabled sensitive data scrubbing. This ensures that the hypervisor never leaves sensitive data in hypervisor-owned memory when it returns to guest kernel-mode or guest user-mode. This prevents the hypervisor from being used as a gadget by guest user-mode. Without always-enabled sensitive data scrubbing, the concern would be that guest user-mode can deliberately trigger hypervisor entry and that the CPU may speculatively fill a microarchitectural buffer with secrets remaining in memory from a previous hypervisor entry triggered by guest kernel-mode or a different guest user-mode application. Always-enabled sensitive data scrubbing fully mitigates this concern. As a bonus, this change improves performance on many Intel processors because it enables the Hyper-V hypervisor to more efficiently mitigate other previously disclosed Intel side channel speculation vulnerabilities.

Overall, the Hyper-V HyperClear architecture has proven to be a readily extensible design providing strong isolation boundaries against a variety of speculative execution side channel attacks with negligible impact on performance.

Go to Original Article
Author: brucesherwin

How to Use Azure Arc for Hybrid Cloud Management and Security

Azure Arc is a new hybrid cloud management option announced by Microsoft in November of 2019. This article serves as a single point of reference for all things Azure Arc.

According to Microsoft’s CEO Satya Nadella, “Azure Arc really marks the beginning of this new era of hybrid computing where there is a control plane built for multi-cloud, multi-edge” (Microsoft Ignite 2019 Keynote at 14:40). That is a strong statement from one of the industry leaders in cloud computing, especially since hybrid cloud computing has already been around for a decade. Essentially Azure Arc allows organizations to use Azure’s management technologies (“control plane”) to centrally administer public cloud resources along with on-premises servers, virtual machines, and containers. Since Microsoft Azure already manages distributed resources at scale, Microsoft is empowering its users to utilize these same features for all of their hardware, including edge servers. All of Azure’s AI, automation, compliance and security best practices are now available to manage all of their distributed cloud resources, and their underlying infrastructure, which is known as “connected machines.” Additionally, several of Azure’s AI and data services can now be deployed on-premises and centrally managed through Azure Arc, enhancing local and offline management and offering greater data sovereignty. Again, this article will provide an overview of the Azure Arc technology and its key capabilities (currently in Public Preview) and will be updated over time.

Contents

Getting Started with Azure Arc

Azure Services with Azure Arc

Azure Artificial Intelligence (AI) with Azure Arc

Azure Automation with Azure Arc

Azure Cost Management & Billing with Azure Arc

Azure Data Services with Azure Arc

Cloud Availability with Azure Arc

Azure Availability & Resiliency with Azure Arc

Azure Backup & Restore with Azure Arc

Azure Site Recovery & Geo-Replication with Azure Arc

Cloud Management with Azure Arc

Management Tools with Azure Arc

Managing Legacy Hardware with Azure Arc

Offline Management with Azure Arc

Always Up-To-Date Tools with Azure Arc

Cloud Security & Compliance with Azure Arc

Azure Key Vault with Azure Arc

Azure Monitor with Azure Arc

Azure Policy with Azure Arc

Azure Security Center with Azure Arc

Azure Advanced Threat Protection with Azure Arc

Azure Update Management with Azure Arc

Role-Based Access Control (RBAC) with Azure Arc

DevOps and Application Management with Azure Arc

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Other DevOps Tools with Azure Arc

DevOps On-Premises with Azure Arc

Elastic Scalability & Rapid Deployment with Azure Arc

Hybrid Cloud Integration with Azure Arc

Azure Stack Hub with Azure Arc

Azure Stack Edge with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Third-Party Integration with Azure Arc

Amazon Web Services (AWS) Integration with Azure Arc

Google Cloud Platform (GCP) Integration with Azure Arc

IBM Kubernetes Service Integration with Azure Arc

Linux VM Integration with Azure Arc

VMware Cloud Solution Integration with Azure Arc

Getting Started with Azure Arc

The Azure Arc public preview was announced in November 2019 at the Microsoft Ignite conference to much fanfare. In its initial release, many fundamental Azure services are supported along with Azure Data Services. Over time, it is expected that a majority of Azure Services will be supported by Azure Arc.

To get started with Azure Arc, check out the following guides and documentation provided by Microsoft.

Additional information will be added once it is made available.

Azure Services with Azure Arc

One of the fundamental benefits of Azure Arc is the ability to bring Azure services to a customer’s own datacenter. In its initial release, Azure Arc includes services for AI, automation, availability, billing, data, DevOps, Kubernetes management, security, and compliance. Over time, additional Azure services will be available through Azure Arc.

Azure Artificial Intelligence (AI) with Azure Arc

Azure Arc leverages Microsoft Azure’s artificial intelligence (AI) services, to power some of its advanced decision-making abilities learned from managing millions of devices at scale. Since Azure AI is continually monitoring billions of endpoints, it is able to perform tasks that can only be achieved at scale, such as identifying an emerging malware attack. Azure AI improves security, compliance, scalability and more for all cloud resources managed by Azure Arc. The services which run Azure AI are hosted in Microsoft Azure, and in disconnected environments, much of the AI processing can run on local servers using Azure Stack Edge.

For more information about Azure AI visit https://azure.microsoft.com/en-us/overview/ai-platform.

Azure Automation with Azure Arc

Azure Automation is a service provided by Azure that automates repetitive tasks which can be time-consuming or error-prone. This saves the organization significant time and money while helping them maintain operational consistency. Custom automation scripts can get triggered by a schedule or an event to automate servicing, track changes, collect inventory and much more. Since Azure Automation uses PowerShell, Python, and graphical runbooks, it can manage diverse software and hardware that supports PowerShell or has APIs. With Azure Arc, any on-premises connected machines and the applications they host can be integrated and automated with any Azure Automation workflow. These workflows can also be run locally on disconnected machines.

For more information about Azure Automation visit https://azure.microsoft.com/en-in/services/automation.

Azure Cost Management & Billing with Azure Arc

Microsoft Azure and other cloud providers use a consumption-based billing model so that tenants only pay for the resources which they consume. Azure Cost Management and Billing provides granular information to understand how cloud storage, network, memory, CPUs and any Azure services are being used. Organizations can set thresholds and get alerts when any consumer or business unit approaches or exceeds their limits. With Azure Arc, organizations can use cloud billing to optimize and manage costs for their on-premises resources also. In addition to Microsoft Azure and Microsoft hybrid cloud workloads, all Amazon AWS spending can be integrated into the same dashboard.

For more information about Azure Cost Management and Billing visit https://azure.microsoft.com/en-us/services/cost-management.

Azure Data Services with Azure Arc

Azure Data Services is the first major service provided by Azure Arc for on-premises servers. This was the top request of many organizations which want the management capabilities of Microsoft Azure, yet need to keep their data on-premises for data sovereignty. This makes Azure Data Services accessible to companies that must keep their customer’s data onsite, such as those working within regulated industries or those which do not have an Azure datacenter within their country.

In the initial release, both Azure SQL Database and Azure Database for PostgreSQL Hyperscale will be available for on-premises deployments. Now organizations can run and offer database as a service (DBaaS) as a platform as a service (PaaS) offering to their tenants. This makes it easier for users to deploy and manage cloud databases on their own infrastructure, without the overhead of setting up and maintaining the infrastructure on a physical server or virtual machine. The Azure Data Services on Azure Arc still require an underlying Kubernetes cluster, but many management frameworks are supported by Microsoft Azure and Azure Arc.

All of the other Azure Arc benefits are included with the data services, such as automation, backup, monitoring, scaling, security, patching and cost management. Additionally, Azure Data Services can run on both connected and disconnected machines. The latest features and updates to the data services are automatically pushed down from Microsoft Azure to Azure Arc members so that the infrastructure is always current and consistent.

For more information about Azure Data Services with Azure Arc visit https://azure.microsoft.com/en-us/services/azure-arc/hybrid-data-services.

Cloud Availability with Azure Arc

One of the main advantages offered by Microsoft Azure is access to its unlimited hardware spread across multiple datacenters which provide business continuity. This gives Azure customers numerous ways to increase service availability, retain more backups, and gain disaster recovery capabilities. With the introduction of Azure Arc, these features provide even greater integration between on-premises servers and Microsoft Azure.

Azure Availability & Resiliency with Azure Arc

With Azure Arc, organizations can leverage Azure’s availability and resiliency features for their on-premises servers. Virtual Machine Scale Sets allow automatic application scaling by rapidly deploying dozens (or thousands) of VMs to quickly increase the processing capabilities of a cloud application. Integrated load-balancing will distribute network traffic, and redundancy is built into the infrastructure to eliminate single points of failure. VM Availability Sets give administrators the ability to select a group of related VMs and force them to distribute themselves across different physical servers. This is recommended for redundant servers or guest clusters where it is important to have each virtualized instanced spread out so that the loss of a single host will not take down an entire service. Azure Availability Zones extend this concept across multiple datacenters by letting organizations deploy datacenter-wide protection schemes that distribute applications and their data across multiple sites. Azure’s automated updating solutions are availability-aware so they will keep services online during a patching cycle, serially updating and rebooting a subset of hosts. Azure Arc helps hybrid cloud services take advantage of all of the Azure resiliency features.

For more information about Azure availability and resiliency visit https://azure.microsoft.com/en-us/features/resiliency.

Azure Backup & Restore with Azure Arc

Many organizations limit their backup plans because of their storage constraints since it can be costly to store large amounts of data which may not need to be accessed again. Azure Backup helps organizations by allowing their data to be backed up and stored on Microsoft Azure. This usually reduces costs as users are only paying for the storage capacity they are using. Additionally storing backups offsite helps minimize data loss as offsite backups provide resiliency to site-wide outages and can protect customers from ransomware. Azure Backup also offers compression, encryption and retention policies to help organizations in regulated industries. Azure Arc manages the backups and recovery of on-premises servers with Microsoft Azure, with the backups being stored in the customer’s own datacenter or in Microsoft Azure.

For more information about Azure Backup visit https://azure.microsoft.com/en-us/services/backup.

Azure Site Recovery & Geo-Replication with Azure Arc

One of the more popular hybrid cloud features enabled with Microsoft Azure is the ability to replicate data from an on-premises location to Microsoft Azure using Azure Site Recovery (ASR). This allows users to have a disaster recovery site without needing to have a second datacenter. ASR is easy to deploy, configure, operate and even tests disaster recovery plans. Using Azure Arc it is possible to set up geo-replication to move data and services from a managed datacenter running Windows Server Hyper-V, VMware vCenter or Amazon Web Services (AWS) public cloud. Destination datacenters can include other datacenters managed by Azure Arc, Microsoft Azure and Amazon AWS.

For more information about Azure Site Recovery visit https://azure.microsoft.com/en-us/services/site-recovery.

Cloud Management with Azure Arc

Azure Arc introduces some on-premises management benefits which were previously available only in Microsoft Azure. These help organizations administer legacy hardware and disconnected machines with Azure-consistent features using multiple management tools.

Management Tools with Azure Arc

One of the fundamental design concepts of Microsoft Azure is to have centralized management layers (“planes”) that support diverse hardware, data, and administrative tools. The fabric plane controls the hardware through a standard set of interfaces and APIs. The data plane allows unified management of structured and unstructured data. And the control plane offers centralized management through various interfaces, including the GUI-based Azure Portal, Azure PowerShell, and other APIs. Each of these layers interfaces with each other through a standard set of controls, so that the operational steps will be identical whether a user deploys a VM via the Azure Portal or via Azure PowerShell. Azure Arc can manage cloud resources with the following Azure Developer Tools:

At the time of this writing, the documentation for Azure Arc is not yet available, but some examples can be found in the quick start guides which are linked in the Getting Started with Azure Arc section.

Managing Legacy Hardware with Azure Arc

Azure Arc is hardware-agnostic, allowing Azure to manage a customer’s diverse or legacy hardware just like an Azure datacenter server. The hardware must meet certain requirements so that a virtualized Kubernetes cluster can be deployed on it, as Azure services run on this virtualized infrastructure. In the Public Preview, servers must be running Windows Server 2012 R2 (or newer) or Ubuntu 16.04 or 18.04. Over time, additional servers will be supported, with rumors of 32-bit (x86), Oracle and Linux hosts being supported as infrastructure servers.

Offline Management with Azure Arc

Azure Arc will even be able to manage servers that are not regularly connected to the Internet, as is common with the military, emergency services, and sea vessels. Azure Arc has a concept of “connected” and “disconnected” machines. Connected servers have an Azure Resource ID and are part of an Azure Resource group. If a server does not sync with Microsoft Azure every 5 minutes, it is considered disconnected, yet it can continue to run its local resources. Azure Arc allows these organizations to use the latest Azure services when they are connected, yet still use many of these features if the servers do not maintain an active connection, including Azure Data Services. Even some services which run Azure AI and are hosted in Microsoft Azure can work in disconnected environments while running on Azure Stack Edge.

Always Up-To-Date Tools with Azure Arc

One of the advantages of using Microsoft Azure is that all the services are kept current by Microsoft. The latest features, best practices, and AI learning are automatically available to all users in real-time as soon as they are released. When an admin logs into the Azure Portal through a web browser, they are immediately exposed to the latest technology to manage their distributed infrastructure. By ensuring that all users have the same management interface and APIs, Microsoft can guarantee consistency of behavior for all users across all hardware, including on-premises infrastructure when using Azure Arc. However, if the hardware is in a disconnected environment (such as on a sea vessel), there could be some configuration drift as older versions of Azure data services and Azure management tools may still be used until they are reconnected and synced.

Cloud Security & Compliance with Azure Arc

Public cloud services like Microsoft Azure are able to offer industry-leading security and compliance due to their scale and expertise. Microsoft employs more than 3,500 of the world’s leading security engineers who have been collaborating for decades to build the industry’s safest infrastructure. Through its billions of endpoints, Microsoft Azure leverages Azure AI to identify anomalies and detect threats before they become widespread. Azure Arc extends all of the security features offered in Microsoft Azure to on-premises infrastructure, including key vaults, monitoring, policies, security, threat protection, and update management.

Azure Key Vault with Azure Arc

When working in a distributed computing environment, managing credentials, passwords, and user access can become complicated. Azure Key Vault is a service that helps enhance data protection and compliance by securely protecting all keys and monitoring access. Azure Key Vault is supported by Azure Arc, allowing credentials for on-premises services and hybrid clouds to be centrally managed through Azure.
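
For example, once a vault exists, storing and retrieving a secret takes only a couple of Azure PowerShell calls (the vault name, secret name, and value below are placeholders):

# Store a secret in an existing key vault, then read it back
$secret = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'ContosoVault' -Name 'SqlSaPassword' -SecretValue $secret
Get-AzKeyVaultSecret -VaultName 'ContosoVault' -Name 'SqlSaPassword'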

For more information about Azure Key Vault visit https://azure.microsoft.com/en-us/services/key-vault.

Azure Monitor with Azure Arc

Azure Monitor is a service that collects and analyzes telemetry data from Azure infrastructure, networks, and applications. The logs from managed services are sent to Azure Monitor, where they are aggregated and analyzed. If a problem is identified, such as an offline server, it can trigger alerts or use Azure Automation to launch recovery workflows. Azure Arc can now monitor on-premises servers, networks, virtualization infrastructure, and applications, just as if they were running in Azure. It even leverages Azure AI and Azure Automation to make recommendations and apply fixes to hybrid cloud infrastructure.
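
As a small sketch of how that telemetry can be queried, the following asks a Log Analytics workspace for machines that have stopped sending heartbeats (the workspace ID is a placeholder, and this assumes the machines report to that workspace):

# Find machines with no heartbeat in the last 15 minutes
$query = 'Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)'
Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace guid>' -Query $query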

For more information about Azure Monitor visit https://azure.microsoft.com/en-us/services/monitor.

Azure Policy with Azure Arc

Most enterprises have certain compliance requirements for their IT infrastructure, especially those organizations within regulated industries. Azure Policy uses Microsoft Azure to audit an environment and aggregate all the compliance data into a single location. Administrators can get alerted about misconfigurations or configuration drift and even trigger automated remediation using Azure Automation. Azure Policy can be used with Azure Arc to apply policies to all connected machines, providing the benefits of cloud compliance to on-premises infrastructure.
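
As an illustration, assigning one of the built-in policy definitions to a resource group takes a few lines of Azure PowerShell (the resource group name is a placeholder, and the example assumes that built-in definition exists in your subscription):

# Assign a built-in policy definition to a resource group
$rg = Get-AzResourceGroup -Name 'ArcServers-RG'
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
New-AzPolicyAssignment -Name 'audit-managed-disks' -PolicyDefinition $definition -Scope $rg.ResourceId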

For more information about Azure Policy visit https://azure.microsoft.com/en-us/services/azure-policy.

Azure Security Center with Azure Arc

The Azure Security Center centralizes all security policies and protects the entire managed environment. When Security Center is enabled, the Azure monitoring agents will report data back from the servers, networks, virtual machines, databases, and applications. The Azure Security Center analytics engines will ingest the data and use AI to provide guidance. It will recommend a broad set of improvements to enhance security, such as closing unnecessary ports or encrypting disks. Perhaps most importantly, it will scan all the managed servers and identify missing updates, and it can use Azure Automation and Azure Update Management to patch those vulnerabilities. Azure Arc extends these security features to connected machines and services to protect all registered resources.

For more information about Azure Security Center visit https://azure.microsoft.com/en-us/services/security-center

Azure Advanced Threat Protection with Azure Arc

Azure Advanced Threat Protection (ATP) is one of the industry’s leading cloud security solutions, looking for anomalies and potential attacks with Azure AI. Azure ATP will look for suspicious computer or user activities and report any alerts in real-time. Azure Arc lets organizations extend this cloud protection to their hybrid and on-premises infrastructure, offering leading threat protection across all of their cloud resources.

For more information about Azure Advanced Threat Protection visit https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection.

Azure Update Management with Azure Arc

Microsoft Azure automates the process of applying patches, updates and security hotfixes to the cloud resources it manages. With Update Management, a series of updates can be scheduled and deployed on non-compliant servers using Azure Automation. Update management is aware of clusters and availability sets, ensuring that a distributed workload remains online while its infrastructure is patched by live migrating running VMs or containers between hosts. Azure will centrally manage updates, assessment reports, deployment results, and can create alerts for failures or other conditions. Organizations can use Azure Arc to automatically analyze and patch their on-premises and connected servers, virtual machines, and applications.

For more information about Azure Update Management visit https://docs.microsoft.com/en-us/azure/automation/automation-update-management.

Role-Based Access Control (RBAC) with Azure Arc

Controlling access to different resources is a critical function for any organization to enforce security and compliance. Microsoft Azure Active Directory (Azure AD) allows its customers to define granular access control for every user or user role based on different types of permissions (read, modify, delete, copy, sharing, etc.). There are also over 70 roles provided by Azure, such as Global Administrator, Virtual Machine Contributor, or Billing Administrator. Azure Arc lets businesses extend role-based access control (RBAC) managed by Azure to on-premises environments. This means that any groups, policies, settings, security principals, and managed identities deployed through Azure AD can now be used to access all managed cloud resources. Azure AD also provides auditing, so it is easy to track any changes made by users or security principals across the hybrid cloud.
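
For example, granting a user one of the built-in roles scoped to a single resource group is a one-line Azure PowerShell call (the account and resource group names are placeholders):

# Grant the Virtual Machine Contributor role on one resource group
New-AzRoleAssignment -SignInName 'operator@contoso.com' -RoleDefinitionName 'Virtual Machine Contributor' -ResourceGroupName 'ArcServers-RG'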

For more information about Role-Based Access Control visit https://docs.microsoft.com/en-us/azure/role-based-access-control/overview.

DevOps and Application Management with Azure Arc

Over the past few years, containers have become more commonplace as they provide certain advantages over VMs, allowing the virtualized applications and services to be abstracted from their underlying virtualized infrastructure. This means that containerized applications can be uniformly deployed anywhere with any tools so that users do not have to worry about the hardware configuration. This technology has become popular amongst application developers, enabling them to manage their entire application development lifecycle without having a dependency on the IT department to set up the physical or virtual infrastructure. This development methodology is often called DevOps. One of the key design requirements with Azure Arc was to make it hardware agnostic, so with Azure Arc, developers can manage their containerized applications the same way whether they are running in Azure, on-premises or in a hybrid configuration.

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Kubernetes is a management tool that allows developers to deploy, manage and update their containers. Azure Kubernetes Service (AKS) is Microsoft’s Kubernetes service, and it can be integrated with Azure Arc. This means that AKS can be used to manage on-premises servers running containers. In addition to Azure Kubernetes Service, Azure Arc can be integrated with other Kubernetes management platforms, including Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service.

For more information about Azure Container Services visit https://azure.microsoft.com/en-us/product-categories/containers and for Azure Kubernetes Services (AKS) visit https://azure.microsoft.com/en-us/services/kubernetes-service.

Other DevOps Tools with Azure Arc

For container management on Azure Arc, developers can use any of the common Kubernetes management platforms, including Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service. All standard deployment and management operations are supported on Azure Arc hardware, enabling cross-cloud management.

More information about the non-Azure management tools is provided in the section on Third-Party Management Tools.

DevOps On-Premises with Azure Arc

Many developers prefer to work on their own hardware and some are required to develop applications in a private environment to keep their data secure. Azure Arc allows developers to build and deploy their applications anywhere utilizing Azure’s cloud-based AI, security and other cloud features while retaining their data, IP or other valuable assets within their own private cloud. Additionally, Azure Active Directory can use role-based access control (RBAC) and Azure Policies to manage developer access to sensitive company resources.

Elastic Scalability & Rapid Deployment with Azure Arc

Containerized applications are designed to start quickly when running on a highly-available Kubernetes cluster. The app will bypass the underlying operating system, allowing it to be rapidly deployed and scaled. These applications can quickly grow to an unlimited capacity when deployed on Microsoft Azure. When using Azure Arc, the applications can be managed across public and private clouds. Applications will usually contain several container types that can be deployed in different locations based on their requirements. A common deployment configuration for a two-tiered application is to deploy the web frontend on Microsoft Azure for scalability and the database in a secure private cloud backend.

Hybrid Cloud Integration with Azure Arc

Microsoft’s hybrid cloud initiatives over the past few years have included certifying on-premises software and hardware configurations known as Azure Stack. Azure Stack allows organizations to run Azure-like services on their own hardware in their datacenter. It allows organizations that may be restricted from using public cloud services to utilize the best parts of Azure within their own datacenter. Azure Stack is most commonly deployed by organizations that have requirements to keep their customers’ data in-house (or within their territory) for data sovereignty, making it popular with customers who could not adopt the Microsoft Azure public cloud. Azure Arc easily integrates with Azure Stack Hub, Azure Stack Edge, and all the Azure Stack HCI configurations, allowing these services to be managed from Azure.

For more information about Azure Stack visit https://azure.microsoft.com/en-us/overview/azure-stack.

Azure Stack Hub with Azure Arc

Azure Stack Hub (formerly Microsoft Azure Stack) offers organizations a way to run Azure services from their own datacenter, from a service provider’s site, or from within an isolated environment. This cloud platform allows users to deploy Windows VMs, Linux VMs and Kubernetes containers on hardware which they operate. This offering is popular with developers who want to run services locally, organizations which need to retain their customer’s data onsite, and groups which are regularly disconnected from the Internet, as is common with sea vessels or emergency response personnel. Azure Arc allows Azure Stack Hub nodes to run supported Azure services (like Azure Data Services) while being centrally managed and optimized via Azure. These applications can be distributed across public, private or hybrid clouds.

For more information about Azure Stack Hub visit https://docs.microsoft.com/en-us/azure-stack/user/?view=azs-1908.

Azure Stack Edge with Azure Arc

Azure Stack Edge (previously Azure Data Box Edge) is a virtual appliance which can run on any hardware in a datacenter, branch office, remote site or disconnected environment. It is designed to run edge computing workloads on Hyper-V VMs, VMware VMs, containers and Azure services. These edge servers are optimized to run IoT, AI and business workloads so that processing can happen onsite, rather than being sent across a network to a cloud datacenter for processing. When the Azure Stack Edge appliance is (re)connected to the network, it transfers any data at high speed, and data transfers can be scheduled to run during off-hours. It supports machine learning capabilities through FPGA or GPU acceleration. Azure Arc can centrally manage Azure Stack Edge, its virtual appliances and physical hardware.

For more information about Azure Stack Edge visit https://azure.microsoft.com/en-us/services/databox/edge.

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) is a program which provides preconfigured hyperconverged hardware from validated OEM partners, optimized to run Azure Stack. Businesses which want to run Azure-like services on-premises can purchase or rent hardware which has been standardized to Microsoft’s requirements. VMs, containers, Azure services, AI, IoT and more can run consistently on the Microsoft Azure public cloud or on Azure Stack HCI hardware in a datacenter. Cloud services can be distributed across multiple datacenters or clouds and centrally managed using Azure Arc.

For more information about Azure Stack HCI visit https://azure.microsoft.com/en-us/overview/azure-stack/hci.

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Azure Lighthouse is a technology designed for managed service providers (MSPs), ISVs or distributed organizations which need to centrally manage their tenants’ resources. Azure Lighthouse allows service providers and tenants to create a two-way trust to allow unified management of cloud resources. Tenants will grant specific permissions for approved user roles on particular cloud resources, so that they can offload the management to their service provider. Now service providers can add their tenants’ private cloud environments under Azure Arc management, so that they can take advantage of the new capabilities which Azure Arc provides.

For more information about Azure Lighthouse visit https://azure.microsoft.com/en-us/services/azure-lighthouse or on the Altaro MSP Dojo.

Third-Party Integration with Azure Arc

Within the Azure management layer (control plane) exists Azure Resource Manager (ARM). ARM provides a way to easily create, manage, monitor and delete any Azure resource. Every native and third-party Azure resource uses ARM to ensure that it can be centrally managed through Azure management tools. Azure Arc now allows non-Azure resources to be managed by Azure. This can include third-party clouds (Amazon Web Services, Google Cloud Platform), Windows and Linux VMs, VMs on non-Microsoft hypervisors (VMware vSphere, Google Compute Engine, Amazon EC2), and Kubernetes containers and clusters (including IBM Kubernetes Service, Google Kubernetes Engine and Amazon EKS). At the time of this writing, limited information is available about third-party integration, but it will be added over time.

Amazon Web Services (AWS) Integration with Azure Arc

Amazon Web Services (AWS) is Amazon’s public cloud platform. Some services from AWS can be managed by Azure Arc. This includes operating virtual machines running on the Amazon Elastic Compute Cloud (EC2) and containers running on Amazon Elastic Kubernetes Service (EKS). Azure Arc also lets an AWS site be used as a geo-replicated disaster recovery location. AWS billing can also be integrated with Azure Cost Management & Billing so that expenses from both cloud providers can be viewed in a single location.

Additional information will be added once it is made available.

Google Cloud Platform (GCP) Integration with Azure Arc

Google Cloud Platform (GCP) is Google’s public cloud platform. Some services from GCP can be managed by Azure Arc. This includes operating virtual machines running on Google Compute Engine (GCE) and containers running on Google Kubernetes Engine (GKE).

Additional information will be added once it is made available.

IBM Kubernetes Service Integration with Azure Arc

IBM Cloud is IBM’s public cloud platform. Some services from IBM Cloud can be managed by Azure Arc. This includes operating containers running on IBM Kubernetes Service (Kube).

Additional information will be added once it is made available.

Linux VM Integration with Azure Arc

In 2014 Microsoft’s CEO Satya Nadella declared, “Microsoft loves Linux”. Since then the company has embraced Linux integration, making Linux a first-class citizen in its ecosystem. Microsoft even contributes code to the Linux kernel so that it operates efficiently when running as a VM or container on Microsoft’s operating systems. Virtually all management features for Windows VMs are available to supported Linux distributions, and this extends to Azure Arc. Azure Arc admins can use Azure to centrally create, manage and optimize Linux VMs running on-premises, just like any standard Windows VM.

VMware Cloud Solution Integration with Azure Arc

VMware offers a popular virtualization platform and management studio (vSphere) which runs on VMware’s hypervisor. Microsoft has acknowledged that many customers running legacy on-premises hardware are using VMware, so it provides numerous integration points to Azure and Azure Arc. Organizations can even virtualize and deploy their entire VMware infrastructure on Azure, rather than in their own datacenter. Microsoft makes it easy to deploy, manage, monitor and migrate VMware systems, and with Azure Arc businesses can now centrally operate their on-premises VMware infrastructure too. While the full management functionality of VMware vSphere is not available through Azure Arc, most standard operations are supported.

For more information about VMware Management with Azure visit https://azure.microsoft.com/en-us/overview/azure-vmware.


Author: Symon Perriman

How to Resize Virtual Hard Disks in Hyper-V

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016+) and Client Hyper-V (Windows 10) have this capability.

An Overview of Hyper-V Disk Resizing

Hyper-V uses two different formats for virtual hard disk files: the original VHD and the newer VHDX. 2016 added a brokered form of VHDX called a “VHD Set”, which follows the same resize rules as VHDX. We can grow both the VHD and VHDX types easily. We can shrink VHDX files with only a bit of work. No supported way exists to shrink a VHD. Once upon a time, a tool was floating around the Internet that would do it. As far as I know, all links to it have gone stale.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

Resizing a virtual disk file only changes the file. It does not impact its contents. The files, partitions, formatting — all of that remains the same. A VHD/X resize operation does not stand alone. You will need to perform additional steps for the contents.

Requirements for VHD/VHDX Disk Resizing

The shrink operation must occur on a system with Hyper-V installed. The tools rely on a service that only exists with Hyper-V.

If no virtual machine owns the virtual disk, then you can operate on it directly without any additional steps.

If a virtual hard disk belongs to a virtual machine, the rules change a bit:

  • If the virtual machine is Off, any of its disks can be resized as though no one owned them
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Special Requirements for Shrinking VHDX

Growing a VHDX doesn’t require any changes inside the VHDX. Shrinking needs a bit more. Sometimes, quite a bit more. The resize directions that I show in this article will grow or shrink a virtual disk file, but you have to prepare the contents before a shrink operation. We have another article that goes into detail on this subject.

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the VM attached the disk in question to its virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the VM attached the disk in question to its virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.

Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.
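
A quick way to see which controller each of a VM’s disks uses (the VM name below is a placeholder):

# List every virtual hard disk on the VM along with its controller type and location
Get-VMHardDiskDrive -VMName 'demo-vm' | Select-Object VMName, ControllerType, ControllerNumber, ControllerLocation, Path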

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD:
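
A representative call, using a placeholder path for the VHDX file, looks like this:

# Grow the virtual disk file to 40GB; the contents and partitions are not touched
Resize-VHD -Path 'C:\LocalVMs\VHDXs\demo.vhdx' -SizeBytes 40gb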

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to a VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
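
For instance, to check those values before attempting a shrink (again using a placeholder path):

# Compare the current size, the smallest allowed size, and the space used on physical storage
Get-VHD -Path 'C:\LocalVMs\VHDXs\demo.vhdx' | Select-Object Path, Size, MinimumSize, FileSize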

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that part in an upcoming section.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it. If the VM has checkpoints, remove them.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink the virtual hard disk. Shrink only appears for VHDXs or VHDSs, and only if they have unallocated space at the end of the file. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
  8. Enter the desired size and click Next.
  9. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

This change only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that in the next sections.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition (a PowerShell sketch follows below).

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
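
If you prefer to script that step inside the Windows guest, a minimal sketch with the Storage module cmdlets looks like the following (the disk and partition numbers are placeholders; check yours with Get-Partition first):

# Run inside the guest: rescan storage, then extend the partition to its maximum supported size
Update-HostStorageCache
$max = (Get-PartitionSupportedSize -DiskNumber 0 -PartitionNumber 2).SizeMax
Resize-Partition -DiskNumber 0 -PartitionNumber 2 -Size $max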

Linux distributions have a wide variety of file systems with their own requirements for partitions and sizing. They also have a plenitude of tools to perform the necessary tasks. Perform an Internet search for your distribution and file system.

VHDX Shrink Operations

As previously mentioned, you can’t shrink a VHDX without making changes to the contained file system first. Review our separate article for steps.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/X and compacting a VHD/X. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. That changes the total allocated space of the contained partitions. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Compact makes no changes to the contained data or partitions. We have an article on compacting VHD/Xs that contain Microsoft file systems and another for compacting VHD/Xs with Linux file systems.
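
For reference, compacting has its own cmdlet. A minimal sketch (placeholder path; the disk must not be attached to a running VM) looks like this:

# Mount read-only so the full compact pass can scan the file system, then compact and dismount
Mount-VHD -Path 'C:\LocalVMs\VHDXs\demo.vhdx' -ReadOnly
Optimize-VHD -Path 'C:\LocalVMs\VHDXs\demo.vhdx' -Mode Full
Dismount-VHD -Path 'C:\LocalVMs\VHDXs\demo.vhdx'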

Note: this page was originally published in January 2018 and has been updated to be relevant as of December 2019.


Author: Eric Siron

Can Windows Server Standard Really Only Run 2 Hyper-V VMs?

Q. Can Windows Server Standard Edition really only run 2 Hyper-V virtual machines?

A. No. Standard Edition can run just as many virtual machines as Datacenter Edition.

I see and field this particular question quite frequently. A misunderstanding of licensing terminology and a lot of tribal knowledge has created an image of an artificial limitation with Standard Edition. The two editions have licensing differences. Their Hyper-V-related functional differences are few; most notably, only Datacenter Edition supports Storage Spaces Direct.

Otherwise, the two editions share functionality.

The True Limitation

The correct statement behind the misconception: a physical host with the minimum Windows Standard Edition license can operate two virtualized instances of Windows Server Standard Edition, as long as the physically-installed instance only operates the virtual machines. That’s a lot to say. But, anything less does not tell the complete story. Despite that, people try anyway. Unfortunately, they shorten it all the way down to, “you can only run two virtual machines,” which is not true.

Virtual Machines Versus Instances

First part: a “virtual machine” and an “operating system instance” are not the same thing. When you use Hyper-V Manager or Failover Cluster Manager or PowerShell to create a new virtual machine, that’s a VM. That empty, non-functional thing that you just built. Hyper-V has a hard limit of 1,024 running virtual machines. I have no idea how many total VMs it will allow. Realistically, you will run out of hardware resources long before you hit any of the stated limits. Up to this point, everything applies equally to Windows Server Standard Edition and Windows Server Datacenter Edition (and Hyper-V Server, as well).

The previous paragraph refers to functional limits. The misstatement that got us here sources from licensing limits. Licenses are legal things. You give money to Microsoft, they allow you to run their product. For this discussion, their operating system products concern us. The licenses in question allow us to run instances of Windows Server. Each distinct, active Windows kernel requires sufficient licensing.

Explaining the “Two”

The “two” is the most truthful part of the misconception. One Windows Server Standard Edition license pack allows for two virtualized instances of Windows Server. You need a certain number of license packs to reach a minimum level (see our eBook on the subject for more information). As a quick synopsis, the minimum license purchase applies to a single host and grants:

  • One physically-installed instance of Windows Server Standard Edition
  • Two virtualized instances of Windows Server Standard Edition

This does not explain everything — only enough to get through this article. Read the linked eBook for more details. Consult your license reseller. Insufficient licensing can cost you a great deal in fines. Take this seriously and talk to trained counsel.

What if I Need More Than Two Virtual Machines on Windows Server Standard Edition?

If you need to run three or more virtual instances of Windows Server, then you buy more licenses for the host. Each time you satisfy the licensing requirements, you have the legal right to run another two Windows Server Standard instances. Due to the per-core licensing model introduced with Windows Server 2016, the minimums vary based on the total number of cores in a system. See the previously-linked eBook for more information.
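
As a rough illustration only (confirm the exact counts with your reseller): under the 2016+ per-core model, a host with two 8-core processors must license all 16 cores, which meets the minimum and grants two virtualized Standard Edition instances; to run four virtualized instances on that same host, you would license all 16 cores a second time.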

What About Other Operating Systems?

If you need to run Linux or BSD instances, then you run them (some distributions do have paid licensing requirements; the distribution manufacturer makes the rules). Linux and BSD instances do not count against the Windows Server instances in any way. If you need to run instances of desktop Windows, then you need one Windows license per instance at the very least. I do not like to discuss licensing desktop Windows, as it has complications and nuances. Definitely consult a licensing expert about those situations. In any case, the two virtualized instances granted by a Windows Server Standard license can only apply to Windows Server Standard.

What About Datacenter Edition?

Mostly, people choose Datacenter Edition for the features. If you need Storage Spaces Direct, then only Datacenter Edition can help you. However, Datacenter Edition allows for an unlimited number of running Windows Server instances. If you run enough on a single host, then the cost for Windows Server Standard eventually meets or exceeds the cost of Datacenter Edition. The exact point depends on the discounts you qualify for. You can expect to break even somewhere around ten to twelve virtual instances.

What About Failover Clustering?

Both Standard and Datacenter Edition can participate as full members in a failover cluster. Each physical host must have sufficient licenses to operate the maximum number of virtual machines it might ever run simultaneously. Consult with your license reseller for more information.

Author: Eric Siron