Expert advice on Windows based systems and hardware

Windows Server containers and Hyper-V containers explained

A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers. This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.

Windows Server containers vs. Hyper-V containers

Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don’t each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.

Hyper-V containers and their dependencies reside in Hyper-V VMs and provide an additional layer of isolation. Note that Hyper-V containers and Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don’t store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.

The role of Docker on Windows Server

In order to package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, supported by Windows Server, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you’ll need to configure it for Windows, a process that includes selecting secured connections and setting disk paths.

One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they’re stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.
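The install sequence Microsoft documented for Windows Server 2016 at the time looked roughly like the sketch below; an elevated PowerShell session and internet access are assumed.

```powershell
# Install the Docker provider and engine from the PowerShell Gallery,
# then reboot so the containers feature finishes enabling.
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force

# After the reboot, confirm the engine responds.
Get-Service -Name docker
docker version
```

After this, the Docker configuration steps described above (secured connections, disk paths) apply to the engine's daemon settings.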

Meet Hyper-V container requirements

If you prefer to use Hyper-V containers, make sure you have Windows Server 2016 installed -- a full or Server Core installation -- along with the Hyper-V role. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn’t yet support AMD processors.
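One possible sequence to meet those requirements on a container-host VM is sketched below. The VM name is hypothetical, and the VM must be powered off before the processor change.

```powershell
# Give the container-host VM two virtual processors and 4 GB of static memory;
# nested virtualization on Windows Server 2016 requires dynamic memory off.
Set-VM -Name 'ContainerHostVM' -ProcessorCount 2
Set-VMMemory -VMName 'ContainerHostVM' -StartupBytes 4GB -DynamicMemoryEnabled $false

# Expose Intel VT-x to the guest and allow MAC spoofing for container networking.
Set-VMProcessor -VMName 'ContainerHostVM' -ExposeVirtualizationExtensions $true
Set-VMNetworkAdapter -VMName 'ContainerHostVM' -MacAddressSpoofing On
```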

Although these requirements might not seem extensive, it’s important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.

OS components affect both container types

Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. Unfortunately, there are different elements that can negatively affect this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don’t match up with the OS kernel running on the host, the container will most likely be blocked.

The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn’t match, but they might not work properly.
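A small sketch of that compatibility check, treating the build number embedded in a hypothetical image tag as a .NET Version:

```powershell
# The host build, e.g., 10.0.14393.1770 on Windows Server 2016.
$hostVer = [System.Environment]::OSVersion.Version

# Hypothetical container image build pulled from its tag.
$imageVer = [Version]'10.0.14393.0'

if (($hostVer.Major -eq $imageVer.Major) -and
    ($hostVer.Minor -eq $imageVer.Minor) -and
    ($hostVer.Build -eq $imageVer.Build)) {
    'Compatible: major, minor and build match; a revision mismatch is tolerated.'
} else {
    'Incompatible: the container will most likely be blocked on this host.'
}
```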

Antimalware tools and container performance

Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.

Antimalware tools aren’t aware of the redirection and don’t know which components are placeholders and which components are modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity. You can modify the container volume by attaching a parameter to the Create CallbackData flag and checking the Extra Create Parameter (ECP) redirection flags. The ECP will then indicate whether the file was opened from a remote layer or from a local layer.

Next Steps

Ensure container isolation and prevent root access

Combine microservices and containers

Maintain high availability with containers and data mirroring

Microsoft cumulative updates bring security, frustration

A year ago, on October Patch Tuesday, Microsoft upended its customers’ monthly security routine when it aligned all supported operating systems to a cumulative updates model — and admins finally have begun to find their footing.

In the old format, administrators could prioritize Microsoft’s critical updates and deploy those as soon as possible. However, this “Swiss cheese” approach — a term used by Windows Server principal program manager Jeff Woolsey at the company’s Ignite 2017 conference — meant admins could pick and choose which vulnerabilities to address. The end result was some systems did not get updates they needed.

Microsoft expanded the cumulative updates model beyond Windows 10 in October 2016 to limit administrators to an all-or-nothing choice. Rather than select which patches to deploy first, the rollup model makes admins determine which systems get patching priority.

“Things have stabilized, and Microsoft has probably achieved their goal at this point of simplifying the process,” said Todd Schell, product manager at Ivanti, an IT security company in South Jordan, Utah. “With these cumulative updates, you don’t have to worry about testing all these individual updates.”

While the blanket approach of the Microsoft cumulative updates model secures systems against all vulnerabilities, admins need a larger test environment and must spend more time to vet every update before deployment.

“Definitely, I think there’s frustration,” Schell said. “This is only one of their jobs for a lot of these people, so the time being tied up adds to the frustration factor.”

Most businesses have adapted to the Microsoft cumulative updates model, which has led to a faster update deployment rate, Schell said. And, as a result, Windows systems are more secure than a year ago, Woolsey reported at Ignite.

“You can’t miss one patch now and say, ‘Whoops, we only deployed 10 of the 11 patches,'” said Jimmy Graham, director of product management for Qualys Inc., based in Redwood City, Calif. “It’s a lot easier to get more updates deployed.”

Watch out for the Search vulnerability

On the anniversary of Microsoft’s cumulative updates model, this year’s October Patch Tuesday includes updates for 62 vulnerabilities, 30 of which affect Windows systems.

Graham said the most important item for Windows Server administrators is CVE-2017-11771, a critical vulnerability that affects Windows Server 2008 and up. This remote code execution exploit lets an unauthenticated intruder use a memory-handling flaw in the Windows Search service to overtake a machine.

CVE-2017-11771 is similar to vulnerabilities Microsoft patched in June, July and August that closed flaws in the Server Message Block protocol. While CVE-2017-11771 is SMB-related, it is not similar to the exploits used in the WannaCry attacks in spring 2017.

“It could be that [Microsoft] is looking at anything related to SMB,” Schell said.

Microsoft also released two updates on October Patch Tuesday that address similar critical vulnerabilities in Windows Server 2008 and up. CVE-2017-11762 and CVE-2017-11763 are remote code execution vulnerabilities in the Windows font library. On an unpatched system, an attacker gains access via a web-based attack or with a malicious file on a server that a user opens.

“It’s one of those backdoors that you don’t think about too often,” Schell said.

Microsoft also flagged CVE-2017-11779 as critical, a remote code execution vulnerability in Windows Domain Name System that affects Windows Server 2012 and up. To capitalize on the exploit, the attacker sends corrupted DNS responses to a system from a malicious DNS server.

In addition, Microsoft closed a zero-day vulnerability in Microsoft Office in CVE-2017-11826. An attacker inserts malicious code in an Office document that, once opened, hands over control of the system.

For more information about the remaining security bulletins for October Patch Tuesday, visit Microsoft’s Security Update Guide.

Dan Cagen is the associate site editor for SearchWindowsServer.com. Write to him at dcagen@techtarget.com.

Configuration Manager tool regulates server updates to stop attacks

Business workers face a persistent wave of online threats — from malicious hacking techniques to ransomware — and it’s up to the administrator to lock down Microsoft systems and protect the company.

Administrators who apply Microsoft’s security updates in a timely fashion thwart many attacks effectively. IT departments use both System Center Configuration Manager and Windows Server Update Services to roll out patches, but the Configuration Manager tool’s scheduling and deployment options make it the preferred utility for this task. Admins gain control and automation over software updates to all managed systems with the Configuration Manager tool, which also monitors compliance and reporting.

Why we wait to update

An organization bases its security update deployment timeline on several factors, including internal policies, strategies, staff and skill sets. Some businesses roll patches out to production servers as soon as Microsoft makes them available on Patch Tuesday, the second Tuesday each month. Other companies wait a week or even a couple months to do the same, due to stringent testing procedures.

Here’s one example of a deployment timeline:

  • Week 1: Handful of test systems (pilot)
  • Week 2: Larger pool of test systems
  • Week 3: Small pool of production servers
  • Week 4: Larger pool of production servers
  • Week 5: All systems

This scenario leaves many endpoints unpatched and vulnerable to security risks for several weeks. Microsoft has a cumulative update model for all supported Windows OSes; the company packages each month’s patches and supersedes the previous month’s release. In some cases, systems won’t be fully patched — or will remain unpatched — if a business fails to deploy the previous month’s security fixes before Microsoft releases the new updates. To avoid this situation, IT organizations should roll out the current month’s updates before the next Patch Tuesday arrives just a few weeks later.
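A quick way to spot servers that missed the prior month's rollup is to check the most recently installed updates; a minimal sketch:

```powershell
# List the five most recently installed updates on this server to confirm
# the prior month's cumulative update landed before the next Patch Tuesday.
Get-HotFix |
    Sort-Object -Property InstalledOn -Descending |
    Select-Object -First 5 -Property HotFixID, Description, InstalledOn
```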

Automatic deployment rule organizes the patch process

An automatic deployment rule (ADR) in the Configuration Manager tool coordinates the patch rollout process. An ADR provides settings to download updates, package them into software update groups, create deployments of the updates for a collection of devices and roll out the updates when it’s most appropriate.

Find the ADR feature in the Configuration Manager tool under the Software Updates menu within the Software Library module. Figure 1 shows its options.

Create a software update group
Figure 1. The automatic deployment rule feature in the Configuration Manager tool builds a deployment package to automate the update procedure.

Settings to configure specific update criteria

The admin sets the ADR options to download and package software updates with the following criteria, which are also shown in Figure 2:

  • released or revised within the last month;
  • only updates that are required by systems evaluated at the last scan;
  • updates that are not superseded; and
  • updates classified as Critical Updates, Security Updates, Feature Packs, Service Packs, Update Rollups or Updates.
Build an automatic deployment rule
Figure 2. The administrator builds the criteria for a software update group in the ADR component.

The property filter — also seen in Figure 2 — packages software updates on a granular scale to best suit the organization’s needs. In the example shown, the admin uses the property filter to only deploy updates released in the last month.

In the evaluation schedule shown in Figure 3, the admin configures an ADR to assess and package software updates at 11 p.m. on the second Tuesday of each month.

ADR custom schedule
Figure 3. The admin builds a schedule to evaluate and package software updates every month at a certain time in the ADR feature of the Configuration Manager tool.
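For admins who prefer scripting, the criteria and schedule above can also be sketched with the Configuration Manager PowerShell cmdlets. The rule and collection names below are hypothetical, and parameter names should be verified against your site's cmdlet version.

```powershell
# Run from a Configuration Manager PowerShell drive, i.e., after
# Import-Module ConfigurationManager and cd to the <SiteCode>: drive.
New-CMSoftwareUpdateAutoDeploymentRule `
    -Name 'Monthly Server Updates' `
    -CollectionName 'Servers - Pilot' `
    -AddToExistingSoftwareUpdateGroup $false `
    -DateReleasedOrRevised Last1Month `
    -UpdateClassification 'Critical Updates', 'Security Updates', 'Update Rollups' `
    -Superseded $false
```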

Set a maintenance window to assist users

To patch servers, use maintenance windows, which control the deployment of software updates to clients in a collection at a specific time. This accommodates server owners who cannot take certain machines down at arbitrary times for a software update and the reboot that follows. In most cases, admins set maintenance windows to run updates overnight to minimize disruption and effects on end users.

Admins can set the deployment schedule in a maintenance window to As soon as possible since the maintenance window controls the actual rollout time. For example, assume the IT staff configured the following maintenance windows for a collection of servers:

  1. Servers-Updates-GroupA: maintenance window from 12 a.m. to 2 a.m.
  2. Servers-Updates-GroupB: maintenance window from 2 a.m. to 4 a.m.
  3. Servers-Updates-GroupC: maintenance window from 4 a.m. to 6 a.m.

If the admin sets these collections to deploy software updates with the As soon as possible flag, the servers download the Microsoft updates when they become available — it could be right in the middle of a busy workday. Instead, the update process waits until 12 a.m. for Servers-Updates-GroupA, 2 a.m. for the next group and so on. Without any deployment schedule, collections install the software updates as soon as possible and reboot if necessary based on the client settings in the Configuration Manager tool.

To create a maintenance window for a collection, click on the starburst icon under the Maintenance Windows tab in the collection properties. Figure 4 shows a maintenance window that runs daily from 2 a.m. to 4 a.m.

Maintenance window schedule
Figure 4. Configure a maintenance window for a collection with a recurring schedule.
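The daily 2 a.m. to 4 a.m. window in Figure 4 can also be created from PowerShell. This is a sketch using the ConfigMgr cmdlets; the collection name is hypothetical, and parameter names may vary by ConfigMgr version.

```powershell
# Build a daily schedule that starts at 2 a.m. and lasts two hours,
# then attach it to a collection as a maintenance window.
$window = New-CMSchedule -Start '02:00' -RecurInterval Days -RecurCount 1 `
    -DurationInterval Hours -DurationCount 2
New-CMMaintenanceWindow -CollectionName 'Servers-Updates-GroupB' `
    -Name 'Overnight updates' -Schedule $window
```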

In this situation, admins should configure an ADR to deploy updates with the Available flag at a specific date and time, but not make the installation mandatory until later. Users apply patches and reboot the system at their convenience. Always impress upon users why they should implement the updates quickly.

Microsoft refines features to maximize uptime

Microsoft added more flexibility to coordinate maintenance and control server uptime in version 1606 of the Configuration Manager tool. The server group settings feature the following controls:

  • the percentage of machines that update at the same time;
  • the number of the machines that update at the same time;
  • the maintenance sequence; and
  • PowerShell scripts that run before and after deployments.

Video: How to use System Center Configuration Manager to plan and execute a patching regimen for applications and OSes.

A server group uses a lock mechanism to ensure one set of machines in the collection executes and completes the update before the process moves to the next set of servers. An admin can release the deployment lock manually if a patch gets stuck before it completes. Microsoft provides more information on updates to server groups.

To develop server group settings, select the All devices are part of the same server group option in the collection properties, and then click on Settings, as seen in Figure 5.

Set server group configuration
Figure 5. Select the All devices are part of the same server group option to configure a collection’s server group settings.

Select the preferred option for the group. In Figure 6, the admin sets the maintenance sequence. Finally, click OK, and the server group is ready.

Maintenance sequence
Figure 6. The administrator uses the server group settings to maintain control over uptime and coordinate the maintenance schedule.

For additional guidance on software update best practices, Microsoft offers pointers for the deployment process.

Next Steps

Secret Service: Culture change needed to boost security

Reduce patching headaches with these tools

Find the right patching software

How to bring Azure costs down to earth

The migration of virtual machines to the cloud sounds great — until your IT department is hit with a huge bill.

For every minute a VM runs and every byte it uses, Microsoft adds charges to a monthly tab. How do you manage Azure costs? The formula is relatively simple — admins should understand the approximate price tag before workloads move to Azure and right-size VMs to reduce wasteful expenses.

Find the right Azure region

The first step is to select the proper Azure region. Each region has different resources, capabilities and services; these facets — and the region’s location relative to the business — determine the cost per region. Not every region is available to every customer — it depends on the organization’s location and subscription. For example, users in the United States cannot use Australian data centers without an Australian billing address.

A move to a less expensive Azure region makes a noticeable difference when it involves several dozen servers. However, a migration to a different Azure region affects the end-user experience with increased latency if applications move farther from users and customers. Admins can use an Azure latency test site to understand network performance per region.

Don’t make one-size-fits-all VMs

To further reduce Azure costs, align VMs to the proper performance level. For example, differentiate between production and dev/test environments, and build VMs accordingly. Dev/test VMs don’t usually need the production specifications as they rarely require high availability. Reduce the resources — and their associated costs — for dev/test VMs so they get only what they need.

Look at infrastructure as a service (IaaS) servers

In the web-based GUI wizard admins use to create servers, Azure presents the high-performance VMs as the default. Click on “View All” in the top right-hand corner of the dialog to reveal the range of server sizes. A0 is small and costs significantly less than Microsoft’s suggested options, which makes it ideal for experimentation.

Range of server sizes
Figure 1: The A0 server size is the smallest and least expensive option.

A0 is also oversubscribed, which means CPU performance varies based on other workloads in the node. The lower tiers also do not support load balancing and have other limitations, but the VMs in those levels make for ideal inexpensive test machines.

Admins also have a disk choice to limit Azure costs. To build an IaaS VM, there are two options: hard disk drives or solid-state drives (SSDs). Standard disks are good enough for most workloads with speeds up to 500 IOPS, depending on the configuration. If speed is not a concern, avoid the more expensive SSD choices.
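To see the low-cost sizes the portal hides behind View All, the AzureRM PowerShell module current at the time could list every size per region; a sketch:

```powershell
# List the smallest (and generally cheapest) VM sizes available in a region.
Get-AzureRmVMSize -Location 'East US' |
    Sort-Object -Property NumberOfCores, MemoryInMB |
    Select-Object -First 5 -Property Name, NumberOfCores, MemoryInMB
```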

Aside from IaaS, there are other options that many users are unaware of or fail to understand.

Implement services as a service

Some administrators new to the cloud see it as pure IaaS where everything needs to run on its own VM. This is an option — but an expensive one.

Instead, think of that SQL Server and all the associated costs for compute, storage and licensing. Why deal with the price and deployment headaches when you can use SQL Server as a service instead? It’s cheaper — a Standard_B4ms VM (four cores, 16 GB of RAM) with SQL Server Standard costs about $383 a month, while an Azure SQL setup for multiple databases costs about $224 a month on a standard tier. Plus, SQL as a service saves the administrator from the patch and update process.
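Using the article's own figures, the back-of-the-envelope math looks like this:

```powershell
# Monthly costs quoted above; the savings math is simple arithmetic.
$vmWithSql  = 383   # Standard_B4ms VM plus SQL Server Standard licensing
$sqlService = 224   # Azure SQL standard tier, multiple databases
$monthlySavings = $vmWithSql - $sqlService   # 159
$annualSavings  = $monthlySavings * 12       # 1,908 a year, before patching labor
```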

Check your company’s security requirements to see if they allow the use of database servers in the cloud. Because these databases are on a shared resource with potentially hundreds of other companies, an exploit or misconfiguration could leak data outside the organization.

Analyze the cost of cloud resources

Admins must understand business requirements and know what costs they bring before a move to the cloud. On-premises compute has inefficiencies and sprawl that add expenses, but the lack of a monthly bill for most environments lets those costs fly under the radar.

By the same token, it’s vital to know the cloud environment’s requirements and the expenses for applications and infrastructure. Use Microsoft’s Azure calculator to work out the potential price tag.

Bundle resources for easier management

Admins should tap into resource groups to further control Azure costs. This feature collects the service resources, such as the VM, database and other assets, into a unit. Once the business no longer needs the service, the admins remove the resource group. This avoids a common housekeeping problem where the IT staff misses an item and the charges for it show up in the next bill.
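A minimal sketch with the AzureRM module, using a hypothetical group name:

```powershell
# Create a resource group, deploy service assets into it, and remove the
# whole unit when the business is done with the service.
New-AzureRmResourceGroup -Name 'rg-payroll-test' -Location 'East US'
# ...deploy the VM, database and related assets into rg-payroll-test...
Remove-AzureRmResourceGroup -Name 'rg-payroll-test' -Force
```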

Efficient code makes a difference

In an on-premises scenario, admins overcome inefficient code with additional resources. In the cloud, where every item has a cost per transaction or per second, better programming lowers expenses.

For example, an inexperienced database programmer who builds an additional temporary database costs the company more money each time a new one spins up in the cloud. As this inefficient practice multiplies with each deployed instance, so does the cost. A better programmer with a more thorough understanding of SQL avoids this waste and builds code that takes less time to run.

Good programmers require higher salaries, but for a company that uses the cloud to scale out, that expense is worth it. The business saves more in the long run because lower resource utilization — thanks to better code — results in a smaller bill from Microsoft.

Next Steps

How Azure users can avoid higher Oracle licensing bills

Five steps to control cloud storage costs

Azure Stack adopters might need a hand

Hyper-V PowerShell commands for every occasion

You can certainly manage Hyper-V hosts and VMs with Hyper-V Manager or System Center Virtual Machine Manager, but in some cases, it’s easier to use PowerShell. With this scripting language and interactive command line, you can perform a number of actions, from simply importing a VM and performing a health check to more complex tasks, like enabling replication and creating checkpoints. Follow these five expert tips, and you’ll be well on your way to becoming a Hyper-V PowerShell pro.

Import and export Hyper-V VMs

If you need to import and export VMs and you don’t have the Hyper-V role installed, you can install Hyper-V PowerShell modules on a management server. To export a single VM, use the Export-VM command. This command creates a folder on the path specified with three subfolders — snapshots, virtual hard disks and VMs — which contain all of your VM files. You also have the option to export all of the VMs running on a Hyper-V host or to specify a handful of VMs to export by creating a text file with the VM names and executing a short script using that file. To import a single VM, use the Import-VM command. The import process will register the VM with Hyper-V and check for compatibility with the target host. If the VM is already registered, the existing VM with the same globally unique identifier will be deleted, and the VM will be registered again.
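The export and import steps above can be sketched as follows; file paths and VM names are hypothetical.

```powershell
# Export every VM named in a text file, one name per line.
Get-Content -Path 'C:\Scripts\VMNames.txt' |
    ForEach-Object { Export-VM -Name $_ -Path 'D:\VMExports' }

# Import a single exported VM by pointing at its configuration file
# (.vmcx on Windows Server 2016 hosts, .xml on older ones).
$config = Get-ChildItem -Path 'D:\VMExports\SQLVM\Virtual Machines' -Filter '*.vmcx'
Import-VM -Path $config.FullName -Register
```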

Check Hyper-V host and VM health

You can perform a complete health check for Hyper-V hosts and VMs by using PowerShell commands. When it comes to checking the health of your Hyper-V hosts, there are a lot of elements to consider, including the Hyper-V OS and its service pack, memory and CPU usages, Hyper-V uptime and total, used and available memory. If you want to perform a health check for a standalone host, you can use individual Hyper-V PowerShell commands. To perform a health check for a cluster, use Get-ClusterNode to generate a report. When performing a VM health check, consider the following factors: VM state, integration services version, uptime, whether the VM is clustered, virtual processors, memory configuration and dynamic memory status. You can use Get-VM to obtain this information and a script using the command to check the health status of VMs in a cluster.
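A condensed version of that health check, assuming the Hyper-V and FailoverClusters modules are available:

```powershell
# Snapshot of VM health on the local host; add -ComputerName for remote hosts.
Get-VM |
    Select-Object -Property Name, State, Uptime, IntegrationServicesVersion,
        ProcessorCount, DynamicMemoryEnabled, MemoryAssigned

# For a cluster, confirm every node is up before drilling into the VMs.
Get-ClusterNode | Select-Object -Property Name, State
```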

Enable Hyper-V replication

Hyper-V replication helps keep VM workloads running in the event of an issue at the production site by replicating those workloads to the disaster recovery site and bringing them online there when need be. To configure Hyper-V replication, you need at least two Hyper-V hosts running Windows Server 2012 or later. There are a few steps involved, but it’s a pretty straightforward process. First, you need to run a script on the replica server to configure the Hyper-V replica and enable required firewall rules. Then, execute a script on the primary server to enable replication for a specific VM — we’ll name it SQLVM, in this case. Finally, initiate the replication with Start-VMInitialReplication –VMName SQLVM. After you’ve completed this process, the VM on the replica server will be turned off, while the one on the primary server will continue to provide services.
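The steps above can be sketched like this; the server names are hypothetical, and the sketch assumes Kerberos authentication over port 80.

```powershell
# Step 1, on the replica server: accept inbound replication and open the listener.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replicas'
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'

# Step 2, on the primary server: enable replication for the SQLVM example.
Enable-VMReplication -VMName 'SQLVM' -ReplicaServerName 'HV-REPLICA' `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Step 3: kick off the initial copy.
Start-VMInitialReplication -VMName 'SQLVM'
```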

Create Hyper-V checkpoints

If you’d like to test applications or just play it safe in case a problem arises, enable Hyper-V checkpoints on your VMs so you can roll back changes to a specific point in time. The option to take point-in-time images is disabled by default, but you can enable it for a single VM with the Set-VM cmdlet. To use production checkpoints, you’ll also have to configure the VM to do so. Once you enable and configure checkpoints for the VM, you can use Checkpoint-VM to create a checkpoint, and the entry will include the date and time it was taken. Unfortunately, that command won’t work on its own to create checkpoints for VMs on remote Hyper-V hosts, but you can use a short script to create a checkpoint in this instance. To restore a checkpoint, simply stop the VM, and then use the Restore-VMSnapshot command.
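Sketched with the SQLVM example from earlier:

```powershell
# Switch the VM to production checkpoints, then take a named checkpoint.
Set-VM -Name 'SQLVM' -CheckpointType Production
Checkpoint-VM -Name 'SQLVM' -SnapshotName "Pre-patch $(Get-Date -Format yyyy-MM-dd)"

# To roll back: stop the VM and restore the most recent checkpoint.
Stop-VM -Name 'SQLVM'
Get-VMSnapshot -VMName 'SQLVM' |
    Sort-Object -Property CreationTime -Descending |
    Select-Object -First 1 |
    Restore-VMSnapshot -Confirm:$false
```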

Use Port ACL rules in Hyper-V

Port Access Control Lists (ACLs) are an easy way to isolate VM traffic from other VMs. To use this feature, you’ll need Windows Server 2012 or later, and your VMs must be connected to a Hyper-V switch. You can create and manage Port ACL rules using just a few Hyper-V PowerShell commands, but you need to gather some information first. Figure out the source of the traffic, the direction of the traffic — inbound, outbound or both — and whether you want to block or allow traffic. Then, you can execute the Add-VMNetworkAdapterACL command with those specific parameters. You can also list all of the Port ACL rules for a VM with the Get-VMNetworkAdapterACL command. To remove a Port ACL rule associated with a VM, use Remove-VMNetworkAdapterACL. As a time-saver, combine the two previous PowerShell cmdlets to remove all of the VM’s Port ACL rules.
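A sketch of the full cycle for a hypothetical VM:

```powershell
# Block traffic between the VM and one subnet in both directions.
Add-VMNetworkAdapterAcl -VMName 'WebVM' -RemoteIPAddress '10.0.5.0/24' `
    -Direction Both -Action Deny

# Review the rules currently attached to the VM's network adapters.
Get-VMNetworkAdapterAcl -VMName 'WebVM'

# Combine the two cmdlets to clear every port ACL rule on the VM.
Get-VMNetworkAdapterAcl -VMName 'WebVM' | Remove-VMNetworkAdapterAcl
```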

Next Steps

Deep dive into Windows PowerShell

Manage cached credentials with PowerShell

Use PowerShell to enable automated software deployment

Windows file server migration tool eases data transfer dread

Files are just objects stored on a file system. So, why is it rarely simple to transfer a bunch of them?

A Windows file server migration should be straightforward. Windows admins have the xcopy command, robocopy and the Copy-Item PowerShell cmdlet at their disposal, each taking a source, a destination and even a recursion switch to find every item in all subfolders. But unforeseen issues always seem to foul up large file migrations.
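For a straight copy that carries NTFS permissions along with the data, robocopy is often the first stop. A sketch with illustrative server names and log path:

```powershell
# Mirror the share; /COPYALL copies data, attributes, timestamps,
# NTFS security, owner and auditing info. /MIR also deletes files at the
# destination that no longer exist at the source -- use it deliberately.
robocopy \\FILESRV\Users \\NEWSRV\Users /MIR /COPYALL /R:1 /W:1 /LOG:C:\Temp\migration.log
```

Low retry counts (/R:1 /W:1) keep a run from stalling for hours on open file handles; review the log for skipped files afterward.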

IT professionals typically overlook two topics before they perform a large Windows file server migration: Microsoft’s New Technology File System (NTFS)/share permissions and open file handles. A typical scenario illustrates these concepts.

Say you've got a 500 GB file server with each employee's home folder stored on the \\FILESRV\Users file share. The IT department plans to map the folder as a network drive, via Group Policy Objects, on every user's desktop. But when it's time to move those home folders, things go wrong. It could be that the disk that stores the home folders is direct-attached. In that case, the admins must migrate it to a storage area network or transfer the data to a different logical unit number. All of that important data must move.

In this scenario, data isn't just cold storage; this data changes every day. It also has a specific permission structure: Employees have full rights to their folders, managers have access to their employees' folders and other miscellaneous NTFS permissions are scattered about. The organization depends on 24/7 availability for this data.

Commercial tools are available to aid in a large Windows file server migration, including Quest's Secure Copy and Swimage. Microsoft offers the free File Server Migration Toolkit (FSMT), which recreates shares. FSMT is a great alternative to fiddling with robocopy switches.

Use FSMT for file transfers

FSMT is a Windows feature, so the user installs it via PowerShell on the destination server:

Install-WindowsFeature Migration -ComputerName DESTINATIONSRV

Once FSMT installs, stay on the destination server, and use the SmigDeploy utility to create the deployment shares. The SmigDeploy tool makes the share on the destination server and performs the required setup on the source server. The syntax below assumes that the source server runs Windows Server 2012 and has an AMD64 architecture, while the share to migrate the profiles to is at E:\Users.

smigdeploy.exe /package /architecture amd64 /os WS12 /path E:\Users

Use a similar command if the source server runs an earlier version of Windows Server.

Once this script generates the E:\Users folder, create a share for it:

New-SmbShare -Path E:\Users -Name Users

Next, copy the deployment folder from the destination server to the source server:

Copy-Item -Path \\DESTINATIONSRV\Users -Destination \\SOURCESRV\c$ -Recurse

Register FSMT on the source server to continue. From the source server, change to the C:\Users\SMT_ws12_amd64 folder, and run SmigDeploy.exe to make FSMT ready for use.

To perform the Windows file server migration, go to the destination server, and import the PowerShell snap-in that the feature installed:

Add-PSSnapin Microsoft.Windows.ServerManager.Migration

Once the snap-in loads, type Receive-SmigServerData. This sets up the destination server to receive data from the source server once it’s initiated. Go to the source server, and send all of the data to the destination:

Send-SmigServerData -ComputerName DESTINATIONSRV -SourcePath D:\Users -DestinationPath C:\Users -Include All -Recurse

Enter the administrator password if prompted, then watch as the files and folders flow over to the destination server. This FSMT process copies the data and keeps permissions in place during the Windows file server migration.

On Windows, PowerShell vs. Bash comparison gets interesting

With Microsoft ushering Bash onto its systems, it’s time to reevaluate Windows PowerShell vs. Bash.

Microsoft partnered with Linux vendor Canonical to port Bash to Windows in 2016. Bash integration with the Windows environment means that users can forgo dual-booting with Canonical's Ubuntu OS to get native Linux capabilities. Script-savvy Windows admins might wonder if Bash on Windows replaces PowerShell, which already offers capabilities similar to shells on Unix and Linux systems, along with OpenSSH connectivity over the Secure Shell protocol.

Does Linux friendliness at Microsoft tip the scales in the Windows PowerShell vs. Bash debate? “Absolutely not,” said Jeffrey Snover, inventor of PowerShell and a Microsoft Technical Fellow. PowerShell is here to stay, he said, so admins must understand the differences between Microsoft’s native automation scripting platform and Bash.

Windows PowerShell vs. Bash

Purpose and scope. PowerShell is a configuration management tool that brings the capabilities of Linux command-line interface (CLI) control into the historically point-and-click Windows environment to efficiently manage Windows servers in virtual deployments. Administrators can manage Windows server workloads or host production Linux workloads and server applications via PowerShell.

Bash, on the other hand, is more suited for developer environments. It was introduced to complement and strengthen CLI-based interaction. With the addition of Bash to Windows, code that developers or infrastructure engineers wrote for Linux can work on their Windows systems, too. Picture Linux-first tools — Python, Ruby, Git — that are common in DevOps shops running directly on Windows.

Bash for Windows doesn’t mean all Windows. It debuted with the Windows 10 Anniversary Update in two parts: the core subsystem and the package. The core subsystem offers Linux APIs on Windows and is an integral part of the Windows 10 Insider Builds. Canonical offers the optional package that brings CLI tools for Linux systems. While PowerShell runs on any Windows version, you need x64 Windows 10 Anniversary Update Build 14393 or later to install and run Bash.

Syntax. PowerShell is not just a shell; it is a complete scripting environment. PowerShell invokes lightweight commands called cmdlets at runtime via automated scripts or APIs. Though PowerShell doesn't require them, old DOS commands still work well. PowerShell uses aliases, which point old commands to the corresponding new ones. The Get-Alias cmdlet gives you a list of all aliases in PowerShell.
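A quick sketch of how aliases bridge the old commands to cmdlets:

```powershell
# dir (DOS) and ls (Unix) are both aliases for the same cmdlet
Get-Alias dir, ls

# List every alias defined in the current session
Get-Alias

# Work the other way: find which aliases point at a given cmdlet
Get-Alias -Definition Get-ChildItem
```

Note that some Unix-style aliases, such as ls, ship with Windows PowerShell but were later dropped from PowerShell on Linux to avoid shadowing the native tools.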

Figure 1 shows an example of two commands, ls from Bash and dir from PowerShell. While they're two separate CLI concepts, the output is not wildly different.

Figure 1. In this comparison of Windows PowerShell vs. Bash, the output for Bash's ls command and PowerShell's dir command is similar.

PowerShell relies on an object pipeline. It pipes objects, passing the output of one cmdlet as the input of another, so the same data can be manipulated with multiple cmdlets in sequence. By piping objects, PowerShell scripts share complex data, passing entire data structures between commands. Bash, on the other hand, passes output and input as plain text, which makes it easy for the user to move information to the next program.
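The difference is easiest to see in a pipeline. In this sketch, every stage receives whole file objects and reads their properties directly, with no text parsing:

```powershell
# Find files over 1 MB, newest first; Where-Object and Sort-Object
# work on the Length and LastWriteTime properties of each object
Get-ChildItem -File |
    Where-Object { $_.Length -gt 1MB } |
    Sort-Object LastWriteTime -Descending |
    Select-Object Name, Length, LastWriteTime
```

A rough Bash equivalent, such as ls -l piped through awk, would instead slice the fifth whitespace-delimited column out of a text listing, which breaks as soon as the output format changes.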

Figure 2. PowerShell output for a directory list shows objects and properties.

Figure 2 shows how a directory list displays in PowerShell. The output is in the form of file objects with properties, such as date created and size, listed beside the files. Bash output, by contrast, is in the form of a set of strings, which are the text representations of file names. The end result is especially important: The scripts you write take the data that is returned and pass it on to another function or use it to perform an action.

Capabilities. PowerShell is a configuration management tool. It enables you to edit the registry, manage Microsoft Azure cloud and Exchange email, or query Windows Management Instrumentation. The Bash shell and command language don't offer these capabilities in Windows. With Bash as a developer tool on Windows, however, users can code and build functions or services while working on the same files from both the Linux and Windows CLI.

PowerShell makes it easy to access registry values and file properties using a common syntax. XML processing is also straightforward.
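For example, the registry is exposed as a drive and XML casts straight into an object; the config file path and element names below are illustrative:

```powershell
# Registry values read like item properties via the HKLM: drive
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
    Select-Object ProductName, CurrentBuild

# XML parses into a navigable object with a simple [xml] cast
[xml]$config = Get-Content -Path 'C:\Temp\app.config'   # hypothetical file
$config.configuration.appSettings.add                   # walk nodes as properties
```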

The value of PowerShell vs. Bash comes down to the user. If you’re working on several Windows systems, Bash is of little use; you’ll need PowerShell to write scripts. Bash doesn’t allow you to access local Linux files with Windows apps. For example, you can’t use Bash to open a Linux file on Windows Notepad. While Bash is great for managing text files in a scripting environment, everything is managed through APIs, not files. So, Bash is really only useful when you want to import Linux code to Windows machines and develop that code.

To manage Windows workloads, PowerShell is effective with its .NET and COM+ integration. There are many object-oriented and modular features installed with PowerShell that extend the functionality of the tool to manage Windows-centric tasks on Active Directory, Exchange Server and SQL Server. VMware is easily managed via PowerShell as well. The CLI has good tracing and debugging functions. Users have multiple integrated development environments to code and test programs.

PowerShell cannot compete with Bash on Linux. Bash boasts an ecosystem of tools built from the ground up with Linux in mind.

Windows PowerShell and Bash differences
This chart shows the differences between Windows PowerShell and Bash.

Admins’ learning curve for PowerShell vs. Bash

PowerShell deals with a lot of scripting. If you come from a Unix/Linux background, the tool looks familiar. However, for Windows GUI adherents, there's a steep learning curve. For instance, PowerShell 4.0 ships with 299 built-in cmdlets, and each cmdlet has multiple options. To complicate matters, you have to use the correct parameters.

Bash on Windows comes with fewer than 40 internal functions and around 100 helper programs. With a slimmer syntax, Bash is faster, but PowerShell has the advantage of a consistent syntax structure. If you're just starting out, it will take some time to thoroughly exploit PowerShell's reach. Users familiar with the tool deploy, manage and repair hundreds of systems from any remote location, automate runbooks and use PowerShell script files to automate repetitive tasks.

Use a performance counter to detect Hyper-V bottlenecks

Although Hyper-V Manager doesn't provide any options for detecting bottlenecks on Hyper-V hosts and VMs, you can use third-party tools to detect issues related to network, storage, CPU and memory. Another option is to use the performance counters that ship with the Windows OS to detect bottlenecks on Hyper-V hosts.

There are various performance counters available to find issues on Hyper-V hosts and VMs, depending on the issue you're facing. For example, if a Hyper-V host isn't operating normally or takes too much time responding to Hyper-V calls from VMs and remote machines, you might want to use the \Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time performance counter to ensure the Hyper-V host has enough processing power available to process requests quickly. If the logical processor run time value stays above 85%, the Hyper-V host is overloaded and requires immediate attention.
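You can sample that counter from PowerShell with Get-Counter; the sampling interval and the 85% threshold below simply mirror the rule of thumb above:

```powershell
# Take five samples, two seconds apart
Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' `
    -SampleInterval 2 -MaxSamples 5

# Flag an overloaded host from a single sample
$cpu = (Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time').CounterSamples[0]
if ($cpu.CookedValue -gt 85) { Write-Warning 'Hyper-V host is overloaded' }
```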

Similarly, if you need to check whether the memory assigned to VMs is sufficient, you can use the \Memory\Available MBytes performance counter. If the available memory value is consistently low, you might want to assign more memory to VMs or increase the maximum memory setting if you're using dynamic memory.

To detect storage latencies or troubleshoot storage-related issues in Hyper-V, use physical disk performance counters, such as \PhysicalDisk\Avg. Disk sec/Read, \PhysicalDisk\Avg. Disk sec/Write, \PhysicalDisk\Avg. Disk Read Queue Length and \PhysicalDisk\Avg. Disk Write Queue Length. If you find high storage latencies, you can buy additional or faster storage or move VMs to other available storage. You can also enable Storage Quality of Service if the Hyper-V host runs Windows Server 2012 R2 or a later OS, which allows you to fine-tune storage policies for VMs.
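The disk counters can be gathered in one Get-Counter call; the wildcard instance covers every physical disk on the host:

```powershell
# Sample latency and queue-length counters for all physical disks.
# Sustained Avg. Disk sec/Read or sec/Write values above roughly
# 25 ms are a common (rule-of-thumb) sign of storage latency.
Get-Counter -Counter @(
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write',
    '\PhysicalDisk(*)\Avg. Disk Read Queue Length',
    '\PhysicalDisk(*)\Avg. Disk Write Queue Length'
) -SampleInterval 5 -MaxSamples 3
```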

To detect network bottlenecks, there are two performance counters available: Physical NIC Bytes/Sec, used to detect network performance for the Hyper-V host, and the Hyper-V Virtual Network Adapter Bytes/Sec performance counter, which can be used to see how a VM network is performing.

You can use the above performance counters to detect bottlenecks in various Hyper-V components, which ultimately helps you get to the root cause of the problem.

Next Steps

Use these Hyper-V performance-tuning tips

Improve VM networking performance

Develop a VM load-balancing strategy to avoid mistakes


Microsoft Nano Server overhaul a disappointment to some

As recently as late 2016, Microsoft hailed Nano Server as the heir apparent to Server Core.

Microsoft Nano Server’s inclusion in Windows Server 2016 excited experts, who buzzed about Nano Server’s smaller footprint of 400 MB — quite a reduction from a Server Core install of about 6 GB. Microsoft offered this new install option for specific services in the data center, such as Hyper-V clusters and scale-out file servers. The company said the smaller footprint — mainly achieved by pulling the GUI — would prevent attacks and reduce the number of patches.

However, Microsoft Nano Server lost those touted capabilities with the 1709 release of Windows Server 2016. Microsoft stripped Nano Server's functionality even further, making it available only as a base image that runs in a container on a container host. Nano Server shrank to around 80 MB as Microsoft removed infrastructure features, such as Hyper-V. It no longer includes Windows PowerShell, .NET Core or Windows Management Instrumentation by default, and it lost the servicing stack used to add roles, features and updates to the OS. To patch or update, the container is redeployed via Docker, which also handles troubleshooting.

This change represents a failure for Microsoft and its customers.

Early adopters got burned

Microsoft’s message is: Don’t believe what we said about Nano Server — sorry if that’s why you upgraded to Windows Server 2016. Now that Microsoft Nano Server is for containers only, anyone who uses it as a file server host or a Hyper-V host must migrate before support ends in spring 2018.

When Microsoft scraps a major feature it promoted during Windows Server 2016’s launch cycle, it means the company thinks the move won’t affect many customers. Most IT pros I talk to who have Windows Server 2016 licenses choose to deploy Windows Server 2012 R2 for now. Proceed with caution before you deploy workloads that use Nano Server or Server Core, because of the murky situation and unclear future for each streamlined OS.

Server Core just isn’t worth it

I don't particularly care for Server Core as an installation option. It causes administrative and day-to-day headaches that far outweigh perceived benefits, such as a reduced attack surface and less patching. The administrative tools aren't great (try to configure firewall ports properly for remote procedure calls and the Remote Server Administration Tools on the first try), although PowerShell is sufficient now that it has matured. Plus, Server Core doesn't save money.

I liked Microsoft Nano Server because it stripped down and refactored Windows Server; it represented Windows for the future. In contrast, Server Core feels like flying with blinders on. Unfortunately, Microsoft did not significantly innovate with Server Core in Windows Server 2016. I recommend Server Core only for security-sensitive deployments and prefer the convenience of the full GUI in almost every other situation, even though I primarily administer Windows Server with PowerShell.

Containers are overhyped

It’s an odd choice to make Nano Server only available for containers. Midmarket and Fortune 500 companies with thousands of Windows Server licenses will not immediately go all-in on containers. Not every organization can buy into the DevOps continuous integration mindset. I have no doubt that container-like technology will prevail. But Windows Server is a product with broad appeal. Microsoft, rather than make a better non-Server Core infrastructure product, spent its time courting everyone’s Docker obsession.

The original Microsoft Nano Server represented a move forward for the company. The new version is a step in the wrong direction.

Antimalware tools can impair Windows container performance

Antivirus and many antimalware tools operate by scanning files against a database of known threats and often perform additional heuristic analysis of those files for potentially unknown threats. In a typical bare-metal or virtualized system, the process of file scanning can take some time and possibly impact workload performance. But malware scanning can pose an even greater performance impact for a system hosting containers.

The problem is shared components. Containers are built from a series of components or layers, such as the Windows base OS package. Those components or layers are typically shared between containers using placeholders — called reparse points — to compose each isolated container. When placeholders are read, the reads are redirected to the underlying component. If a container modifies a component, the placeholder is replaced with the modified component.

However, most antimalware tools operate above this level and never see the redirection taking place. Therefore, they have no way of knowing which container components are placeholders and which are modified. As a result, a scanning process can wind up rescanning the same underlying components for every container. This can cause a significant amount of redundant scanning on a host system with many containers. The result is reduced container performance because the same components are getting scanned far more often than they need to be.

It might be possible to avoid redundant scanning by helping antimalware tools "see" whether the container components are placeholders or modified (new) elements. An antimalware filter can retrieve the extra create parameter (ECP) attached to a file's create request, which carries placeholder information, and then check the ECP's redirection flags. If the ECP indicates that a file was opened from a remote or registered layer, the antimalware tool can skip the scan. If the ECP indicates that a file was opened from a local package or scratch layer, the antimalware tool can scan normally.

Microsoft documentation provides additional details and instructions for this antimalware scanning workaround.

Next Steps

Learn about antimalware protection and endpoint security

Secure each layer of the container stack

Ensure container isolation and prevent root access
