Tag Archives: administrators

Explore the Cubic congestion control provider for Windows

Administrators may not be familiar with the Cubic congestion control provider, but Microsoft’s move to make this the default setting in the Windows networking stack means IT will need to learn how it works and how to manage it.

When Microsoft released Windows Server version 1709 in its Semi-Annual Channel, the company introduced a number of features, such as support for data deduplication in the Resilient File System and support for virtual network encryption.

Microsoft also made the Cubic algorithm the default congestion control provider for that version of Windows Server. The most recent preview builds of Windows 10 and Windows Server 2019 (Long-Term Servicing Channel) also enable Cubic by default.

Microsoft added Cubic to Windows Server 2016 as well, but it calls that implementation an experimental feature. Given that disclaimer, administrators should learn how to manage Cubic in case unexpected behavior occurs.

Why Cubic matters in today’s data centers

Congestion control mechanisms improve performance by monitoring packet loss and latency and making adjustments accordingly. Traditional TCP congestion control starts with a small congestion window and then gradually increases the window size over time. This process stops when the maximum receive window size is reached or packet loss occurs. However, this method hasn’t aged well with the advent of high-bandwidth networks.

For the last several years, Windows has used Compound TCP as its standard congestion control provider. Compound TCP increases the size of the receive window and the volume of data sent.

Cubic, which has been the default congestion control provider for Linux since 2006, is an algorithm that improves traffic flow by keeping track of congestion events and dynamically adjusting the congestion window.
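
For reference, the published CUBIC specification defines the congestion window as a cubic function of the time elapsed since the last congestion event:

W(t) = C * (t - K)^3 + Wmax

Here, Wmax is the window size when the last loss occurred, C is a scaling constant and K is the time the function takes to grow back to Wmax. Growth flattens as the window approaches Wmax and accelerates again beyond it, which is what lets Cubic probe for additional bandwidth quickly on high-bandwidth, long-distance links.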

A Microsoft blog on the networking features in Windows Server 2019 said Cubic performs better over a high-speed, long-distance network because it accelerates to optimal speed more quickly than Compound TCP.

Enable and disable Cubic with netsh commands

Microsoft added Cubic to later builds of Windows Server 2016. You can use the following PowerShell command to see if Cubic is in your build:

Get-NetTCPSetting | Select-Object SettingName, CongestionProvider

Technically, Cubic is a TCP/IP add-on. Because PowerShell does not support Cubic yet, admins must enable it in Windows Server 2016 with the netsh utility from an elevated command prompt.

Netsh uses the concepts of contexts and subcontexts to configure many aspects of Windows Server’s networking stack. A context is similar to a mode. For example, the netsh firewall command places netsh in a firewall context, which means that the utility will accept firewall-related commands.

Microsoft added Cubic-related functionality into the netsh interface context. The interface context — abbreviated as INT in some Microsoft documentation — provides commands to manage the TCP/IP protocol.

Prior to Windows Server 2012, admins could make global changes to the TCP/IP stack by referencing the desired setting directly. For example, if an administrator wanted to use the Compound TCP congestion control provider — which had been the default congestion control provider since Windows Vista and Windows Server 2008 — they could use the following command:

netsh int tcp set global congestionprovider=ctcp

Newer versions of Windows Server use netsh and the interface context, but Microsoft made some syntax changes in Windows Server 2012 that carried over to Windows Server 2016. Rather than setting values directly, Windows Server 2012 and Windows Server 2016 use supplemental templates.

In this example, we enable Cubic in Windows Server 2016:

netsh int tcp set supplemental template=internet congestionprovider=cubic

This command launches netsh, switches to the interface context, loads the Internet CongestionProvider template and sets the congestion control provider to Cubic. Similarly, we can switch from the Cubic provider to the default Compound congestion provider with the following command:

netsh int tcp set supplemental template=internet congestionprovider=compound
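
To confirm which provider each template uses after a change, list the supplemental template settings:

netsh int tcp show supplemental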

Learn the tricks for using Microsoft Teams with Exchange

Using Microsoft Teams means Exchange administrators need to understand how this emerging collaboration service connects to the Exchange Online and Exchange on-premises systems.

At its 2017 Ignite conference, Microsoft unveiled its intelligent communications plan, which mapped out the movement of features from Skype for Business to Microsoft Teams, the Office 365 team collaboration service launched in March 2017. Since that September 2017 conference, Microsoft has added meetings and calling features to Teams, while also enhancing the product’s overall functionality.

Organizations that run Exchange need to understand how Microsoft Teams relies on Office 365 Groups, as well as the setup considerations Exchange administrators need to know.

How Microsoft Teams depends on Office 365 Groups

Each team in Microsoft Teams depends on the functionality provided by Office 365 Groups, such as shared mailboxes or SharePoint Online team sites. An organization can permit all users to create a team and Office 365 Group, or it can limit this ability by group membership. 

When users create a new team, they can link it to an existing Office 365 Group; otherwise, Teams creates a new group.

Microsoft Teams is Microsoft’s foray into the team collaboration space. Using Microsoft Teams with Exchange requires administrators to stay abreast of roadmap plans to configure and use the collaboration offering properly.

Microsoft recently adjusted settings so new Office 365 Groups created by Microsoft Teams do not appear in Outlook by default. If administrators want new groups to show in Outlook, they can use the Set-UnifiedGroup PowerShell cmdlet.
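
As a minimal sketch, assuming a Teams-created group named "Sales Team" — a hypothetical name — the following Exchange Online PowerShell command makes the group visible in Outlook:

Set-UnifiedGroup -Identity "Sales Team" -HiddenFromExchangeClientsEnabled:$false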

Microsoft Teams’ reliance on Office 365 Groups affects organizations that run an Exchange hybrid configuration. In this scenario, the Azure AD Connect group writeback feature can be enabled to synchronize Office 365 Groups to Exchange on premises as distribution groups. With this setting, however, the many Office 365 Groups created via Microsoft Teams will all appear in Exchange on premises. Administrators will need to watch this and adjust the configuration if necessary.

Using Microsoft Teams with Exchange Online vs. Exchange on premises

Exchange Online subscribers get access to all the Microsoft Teams features. However, if the organization uses Exchange on premises, certain functionality, such as the ability to modify user profile pictures and add connectors, is not available.

Without connectors, users cannot plug third-party systems into Microsoft Teams; certain add-ins, like the Twitter connector that delivers tweets into a Microsoft Teams channel, cannot be used. Additionally, organizations that use Microsoft Teams with Exchange on-premises mailboxes must run on Exchange 2016 cumulative update 3 or higher to create and view meetings in Microsoft Teams.

Message hygiene services and Microsoft Teams

Antispam technology might need to be adjusted due to some Microsoft Teams and Exchange integration issues.

When a new member joins a team, Teams sends the new member a notification email from the email.teams.microsoft.com domain. Microsoft owns this domain name, and the tenant administrator cannot change it.

Because the domain is considered an external email domain to the organization’s Exchange Online deployment, the organization’s antispam configuration in Exchange Online Protection may mark the notification email as spam. Consequently, the new member might not receive the email or may not see it if it goes into the junk email folder.

To prevent this situation, Microsoft recommends adding email.teams.microsoft.com to the allowed domains list in Exchange Online Protection.
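
One way to do this from Exchange Online PowerShell is to add the domain to the spam filter policy’s allowed sender domains; this sketch assumes the tenant still uses the built-in policy named Default:

Set-HostedContentFilterPolicy -Identity Default -AllowedSenderDomains @{Add="email.teams.microsoft.com"}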

Complications with security and compliance tools

Administrators need to understand the security and compliance functionality when using Microsoft Teams with Exchange Online or Exchange on premises. Office 365 copies team channel conversations into the Office 365 Group’s shared mailbox in Exchange Online so its security and compliance tools, such as eDiscovery, can examine the content. However, Office 365 stores copies of chat conversations in the users’ Exchange Online mailboxes, not in the group’s shared mailbox.

Historically, Office 365 security and compliance tools could not access conversation content in an Exchange on-premises mailbox in a hybrid environment. Microsoft made changes to support this scenario, but customers must request this feature via Microsoft support.

Configure Exchange to send email to Microsoft Teams

An organization might want its users to have the ability to send email messages from Exchange Online or Exchange on premises to channels in Microsoft Teams. To send an email message to a channel, users need the channel’s email address and permission from the administrator. Each channel has a unique email address, which a right-click on the channel reveals via the Get email address option.

Administrators can restrict the domains permitted to send email to a channel in the Teams administrator settings in the new Microsoft Teams and Skype for Business admin center.

VMware HCX makes hybrid, multi-cloud more attainable

LAS VEGAS — VMware HCX attempts to drive migration to hybrid and multi-cloud architectures, but IT administrators are still hesitant to make the switch due to concerns around cost and complexity.

Before doing product evaluations and determining if VMware Hybrid Cloud Extension (HCX) is a good option for workload migration, admins must figure out if the cloud meets their current and future business needs. What is the organization trying to accomplish with its existing deployments?

For example, consider a near-end-of-support vSphere 5.5 environment: Is the goal to seamlessly migrate those workloads from the current environment to the cloud without an on-premises upgrade? Or, is successfully migrating hundreds of VMs or large amounts of storage the objective?

Determining the ultimate goal and whether a private cloud, hybrid cloud, public cloud or multi-cloud makes the most sense is a decision that admins must make on a case-by-case basis.

Cloud cost and complexity concerns

The ongoing fee associated with using cloud services is just one of the cost concerns, experts said in a session here at VMworld 2018. During the migration, admins have to worry about whether they’ll need to change IPs, the potential of running into compatibility issues, and the responsibility of ensuring business continuity and disaster recovery.

“Even after we meet all their requirements, we’ve seen in any organization all kinds of inertia about getting going,” said Allwyn Sequeira, senior vice president and general manager of hybrid cloud services at VMware. “People think they need to go buy high-bandwidth pipes to connect from on-prem to the cloud. People think they need to do an assessment of applications to see if this is an app that should be moved to the cloud.”

App dependencies and mapping are certainly important issues to consider. With more VMs, the environment is more complex; it’s easier to break something during migration.

Even when a certain vendor or product addresses their concerns, admins need buy-in from networking, security, compliance and governance teams before moving forward with the cloud.

The introduction of VMware HCX is the vendor’s attempt to remove some of the roadblocks keeping organizations from adopting hybrid and multi-cloud environments.

What is VMware HCX, and what are its use cases?

VMware HCX, also known as NSX Hybrid Connect, is a platform that enables admins to migrate VMs and applications between vSphere infrastructures running version 5.0 or later and from on-premises environments to the cloud.

The top use cases of VMware HCX include consolidating and modernizing the data center, extending the data center to the cloud, and disaster recovery.

“HCX gives you freedom of choice,” said Nathan Thaler, director of cloud platforms at MIT in Cambridge, Mass. “You can move your workload into a cloud provider as long as it works for you, and then you can move it out without any lock-in. We’ve moved certain VMs between multiple states and without any network downtime.”

Thaler did caution organizations to avoid virtual hardware versions beyond the highest level that the oldest cloud environment in use can support.

Disaster recovery to the cloud, while maybe not as front of mind as other popular use cases, is key in the event of a natural disaster.

“We wanted to be able to have resiliency whether it’s an East Coast event or a West Coast event,” said HCX customer Gary Goforth, senior systems engineer at ConnectWise Inc., a business management software provider based in Tampa, Fla.

VMware HCX-supported features include Encrypted vMotion, vSphere Replication and scheduled migrations. The functionality itself seems to be what admins are really looking for.

“We wanted a fairly simple, easy way to implement a cloud,” Goforth said. “We wanted to do it with minimal to no downtime and to handle a bulk migration of our virtual machines.”

In terms of the VMware HCX roadmap, the vendor is working on constructs to move workloads across different clouds, Sequeira said.

“It’s all about interconnecting data centers to each other,” he said. “Ultimately, at the end of the day, where you run is going to become less important than what services you need.”

Plan your Exchange migration to Office 365 with confidence

Introduction

Choosing an Exchange migration to Office 365 is just the beginning of this process for administrators. Migrating all the content, troubleshooting the issues and then getting the settings just right in a new system can be overwhelming, especially with tricky legacy archives.

Even though it might appear that the Exchange migration to Office 365 is happening everywhere, transitioning to the cloud is not a black and white choice for every organization. On-premises servers still get the job done; however, Exchange Online offers a constant flow of new features and costs less in some cases. Administrators should also consider a hybrid deployment to get the benefits of both platforms.

Once you have determined the right configuration, you will have to choose how to transfer archived emails and public folders and which tools to use. Beyond relocating mailboxes, administrators have to keep content accessible and security a priority during an Exchange migration to Office 365.

This guide simplifies the decision-making process and steers administrators away from common issues. More advanced tutorials share the reasons to keep certain data on premises and the tricks to set up the cloud service for optimal results.

1. Before the move

Plan your Exchange migration

Prepare for your move from Exchange Server to the cloud by understanding your deployment options and tools to smooth out any bumps in the road.

2. After the move

Working with Exchange Online

After you’ve made the switch to Office 365’s hosted email platform, these tools and practices will have your organization taking advantage of the new platform’s perks without delay.

3. Glossary

Definitions related to Exchange Server migration

Understand the terms related to moving Exchange mailboxes.

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

“;
}
});

/**
* remove unnecessary class from ul
*/
$(“#inlineregform”).find( “ul” ).removeClass(“default-list”);

/**
* Replace “errorMessageInput” class with “sign-up-error-msg” class
*/
function renameErrorMsgClass() {
$(“.errorMessageInput”).each(function() {
if ($(this).hasClass(“hidden”)) {
$(this).removeClass(“errorMessageInput hidden”).addClass(“sign-up-error-msg hidden”);
} else {
$(this).removeClass(“errorMessageInput”).addClass(“sign-up-error-msg”);
}
});
}

/**
* when validation function is called, replace “errorMessageInput” with “sign-up-error-msg”
* before return
*/
function validateThis(v, form) {
var validateReturn = urValidation.validate(v, form);
renameErrorMsgClass();
return validateReturn;
}

/**
* DoC pop-up window js – included in moScripts.js which is not included in responsive page
*/
$(“#inlineRegistration”).on(“click”,”a.consentWindow”, function(e) {
window.open(this.href, “Consent”, “width=500,height=600,scrollbars=1”);
e.preventDefault();
});

these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and deploy them quickly, integrate them with continuous delivery and DevOps-style workflows, and isolate them from one another for security.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS provider as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.
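
Resizing the agent pool, for example, is a single CLI operation. This sketch assumes a cluster named AKSCluster1 in a resource group named AKSCluster, the same names used in the walkthrough below:

az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3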

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

Starting a PowerShell session in Azure Cloud Shell

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService

How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.
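
For instance, assuming the developers hand off a manifest saved as app-manifest.yaml — a hypothetical file name — the operations team can apply it to the cluster and watch the pods start:

kubectl apply -f app-manifest.yaml
kubectl get pods --watch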

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.

IDC, Cisco survey assesses future IT staffing needs

Network engineers, architects and administrators will be among the most critical job positions to fill if enterprises are to meet their digital transformation goals, according to an IDC survey tracking future IT staffing trends.

The survey, sponsored by Cisco, zeroed in on the top 10 technology trends shaping IT hiring and 20 specific roles IT professionals should consider in terms of expanding their skills and training. IDC surveyed global IT hiring managers and examined an estimated 2 million IT job postings to assess current and future IT staffing needs.

The survey results showed digital transformation is increasing demand for skills in a number of key technology areas, driven by the growing number of network-connected devices, the adoption of cloud services and the rise in security threats.

Intersections provide hot jobs

IDC classified the intersections of where hot technologies and jobs meet as “significant IT opportunities” for current and future IT staffing, said Mark Leary, directing analyst at Cisco Services.

“From computing and networking resources to systems software resources, lots of the hot jobs function at these intersections and take advantage of automation, AI and machine learning.” Rather than eliminating IT staff jobs, many of these roles take advantage of those same technologies, he added.

Organizations are preparing for future IT staffing by filling vacant IT positions from within rather than hiring from outside the company, then sending staff to training, if needed, according to the survey.

But technology workers still should investigate where the biggest challenges exist and determine where they may be most valued, Leary said.

“Quite frankly, IT people have to have greater understanding of the business processes and of the innovation that’s going on within the lines of business and have much more of a customer focus.”

The internet of things illustrates the complexity of emerging digital systems. Any IoT implementation requires from 10 to 12 major technologies to come together successfully, and the IT organization is seen as the place where that expertise lies, Leary said.

IDC’s research found organizations place a high value on training and certifications. IDC found that 70% of IT leaders believe certifications are an indicator of a candidate’s qualifications and 82% of digital transformation executives believe certifications speed innovation and new ways to support the business.

Network influences future IT staffing

IDC’s results also reflect the changes going on within enterprise networking.

Digital transformation is raising the bar on networking staffs, specifically because it requires enterprises to focus on newer technologies, Leary said. The point of developing skills in network programming, for example, is to work with the capabilities of automation tools so they can access analytics and big data.

In 2015, only one in 15 Cisco-certified workers viewed network programming as critical to his or her job. By 2017, that figure had risen to one in four. “This isn’t something that’s evolutionary; it’s revolutionary,” Leary said.

While the traditional measure of success was to make sure the network was up and running with 99.999% availability, that goal is being replaced by network readiness, Leary said. “Now you need to know if your network is ready to absorb new applications or that new video stream or those new customers we just let on the network.”

Leary is involved with making sure Cisco training and certifications are relevant and matched to jobs and organizational needs, he said. “We’ve been through a series of enhancements for the network programmability training we offer, and we continually add things to it,” he added. Cisco also monitors customers to make sure they’re learning about the right technologies and tools rather than just deploying technologies faster.

To meet the new networking demands, Cisco is changing its CCNA, CCNP and CCIE certifications in two ways, Leary said. “We’ve developed a lot of new content that focuses on cybersecurity, network programming, cloud interactions and such because the person who is working in networking is doing that,” he said. The other emphasis is to make sure networking staff understands the language of other groups, such as software developers.

Meltdown and Spectre bugs dominate January Patch Tuesday

Administrators have their work cut out for them on multiple fronts after a serious security flaw surfaced that affects most operating systems and devices.

The Meltdown and Spectre vulnerabilities affect most modern CPUs — from Intel-based server systems to ARM processors in mobile phones — and could allow an attacker to pull sensitive data from memory. Microsoft mitigated the flaws with several out-of-band patches last week, which have been folded into the January Patch Tuesday cumulative updates. Full protection from the exploits will require a more concerted effort from administrators, however.

Researchers only recently discovered the flaws, which have existed for approximately 20 years. The Meltdown (CVE-2017-5754) and Spectre (CVE-2017-5753 and CVE-2017-5715) exploits target the CPU’s speculative execution functionality, which anticipates the code a program might run next and puts the relevant data and instructions into memory. A CPU exploit written in JavaScript on a malicious website could pull sensitive information from the memory of an unpatched system.

“You could leak cookies, session keys, credentials — information like that,” said Jimmy Graham, director of product management for Qualys Inc., based in Redwood City, Calif.

In other January Patch Tuesday releases, Microsoft updated the Edge and Internet Explorer browsers to reduce the threat from Meltdown and Spectre attacks. Aside from these CPU-related fixes, Microsoft issued patches for 56 other vulnerabilities with 16 rated as critical, including a zero-day exploit in Microsoft Office (CVE-2018-0802).

Microsoft’s attempt to address the CPU exploits had an adverse effect on some AMD systems, which could not boot after IT applied the patches. This issue prompted the company to pull those fixes until it produces a more reliable update.

Most major cloud providers claim they have closed this security gap, but administrators of on-premises systems will have to complete several deployment stages to fully protect their systems.

“This is a nasty one,” said Harjit Dhaliwal, a senior systems administrator in the higher education sector who handles patching for his environment. “This is not one of your normal vulnerabilities where you just have a patch and you’re done. Fixing this involves a Microsoft patch, registry entries and firmware updates.”

Administrators must ensure their antivirus product has set the proper registry key; otherwise, Windows Update will not deliver the Meltdown and Spectre patches. Windows Server systems require a separate registry change to enable the protections in Microsoft’s Meltdown and Spectre patches. The IT staff must identify the devices under their purview and use that information to gather any firmware updates from the vendors. Firmware updates correct the two Spectre-related exploits, while Microsoft plugged the Meltdown vulnerability with code changes to the kernel.
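
For the Windows Server registry change, Microsoft’s guidance at the time was to enable the protections with the following commands from an elevated prompt and then reboot; as with any registry edit, test before deploying widely:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f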

Dhaliwal manages approximately 5,000 Windows systems, ranging from laptops to Windows Server systems, with some models several years old. He is exploring a way to automate the firmware collection and deployment process, but certain security restrictions make this task even more challenging. His organization requires BitLocker on all systems, which must be disabled to apply a firmware update, otherwise he could run into encryption key problems.

“This is not going to be an overnight process,” Dhaliwal said.

How expansive are Meltdown and Spectre?

Attacks that use Meltdown and Spectre exploit flaws in speculative execution, a performance feature of many modern CPUs. The difference between the two vulnerabilities is the kind of memory that is exposed to the attacker. Exploits that use the flaws can expose data that resides in the system’s memory, such as login information from a password manager.

Microsoft noted Meltdown and Spectre exist in many processors — Intel, AMD and ARM — and other operating systems, including Google Android and Chrome, and Apple iOS and macOS. Apple reportedly has closed the vulnerabilities in its mobile phones, while the status of Android patching varies by OEM. Meltdown only affects Intel processors, while the Spectre exploit works with processors from Intel, AMD and ARM, according to researchers.

Virtualized workloads may require fine-tuning

Some administrators have confirmed early reports that the Meltdown and Spectre patches from Microsoft affect system performance.

Dave Kawula, principal consultant at TriCon Elite Consulting, applied the updates to his Windows Server 2016 setup and ran the VM Fleet utility, which runs a stress test with virtualized workloads on Hyper-V and the Storage Spaces Direct pooled storage feature. The results were troubling, with preliminary tests showing a performance loss of about 35%, Kawula said.

“As it stands, this is going to be a huge issue,” he said. “Administrators better rethink all their virtualization farms, because Meltdown and Spectre are throwing a wrench into all of our designs.”

Intel has been updating its BIOS code since the exploits were made public, and the company will likely refine its firmware to reduce the impact from the fix, Graham said.

For more information about the remaining security bulletins for January Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.

How does Data Protection Manager 2016 save and restore data?

System Center Data Protection Manager 2016 typically stores its backups in a storage pool on the DPM server. But administrators have flexibility to put those backups on storage that is located — and partitioned — elsewhere.

To get started, IT administrators install a DPM agent on every computer to protect, then add that machine to a protection group in DPM. A protection group is a collection of computers that all share the same protection settings or configurations, such as the group name, protection policy, disk target and replica method.

After the agent installation and configuration process, DPM produces a replica for every protection group member, which can include volumes, shares, folders, Exchange storage groups and SQL Server databases. System Center Data Protection Manager 2016 builds replicas in a provisioned storage pool.

After DPM generates the initial replicas, its agents track changes to the protected data and send that information to the DPM server. DPM will then use the change journal to update the file data replicas at the intervals specified by the configuration. During synchronization, any changes are sent to the DPM server, which applies them to the replica.

DPM also periodically checks the replica for consistency with block-level verification and corrects any problems in the replica. Administrators can set recovery points for a protection group member to create multiple recoverable versions for each backup.
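
For administrators who prefer to script this, a rough sketch of the equivalent DPM PowerShell workflow follows; the server name DPM01 is hypothetical, and cmdlet parameters can vary by DPM version, so treat this as illustrative:

$pg = Get-DPMProtectionGroup -DPMServerName "DPM01"
$ds = Get-DPMDatasource -ProtectionGroup $pg[0]
New-DPMRecoveryPoint -Datasource $ds[0] -Disk -DiskRecoveryPointOption ExpressFull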

Application data backups require additional planning

Application data protection can vary based on the application and the selected backup type. Administrators need to be aware that certain applications do not support every DPM backup type. For example, Microsoft Virtual Server and some SQL Server databases do not support incremental backups.

For a synchronization job, System Center Data Protection Manager 2016 tracks application data changes and moves them to the DPM server, similar to an incremental backup. Updates are combined with the base replica to form the complete backup.

For an express full backup job, System Center Data Protection Manager 2016 uses a complete Volume Shadow Copy Service snapshot, but transfers only changed blocks to the DPM server. Each full backup creates a recovery point for the application’s data.

Generally, incremental synchronizations are faster to back up but can take longer to restore. To balance the time needed to restore content, DPM periodically creates full backups that integrate the collected changes, which speeds up a recovery. DPM can support up to 64 recovery points for each member of a protection group, and it can retain up to 448 express full backups and 96 incremental backups for each express full backup.

The DPM recovery process is straightforward regardless of the backup type or target. Administrators select the desired recovery point with the Recovery Wizard in the DPM Administrator Console. DPM will restore the data from that point to the desired target or destination. The Recovery Wizard will denote the location and availability of the backup media. If the backup media — such as tape — is not available, the restoration process will fail.

Prevent Exchange Server virtualization deployment woes

There are other measures administrators should take to keep the email flowing.

In my work as a consultant, I find many customers get a lot of incorrect information about virtualizing Exchange. These organizations often deploy Exchange on virtual hardware in ways that Microsoft does not support or recommend, which results in major performance issues. This tip will explain the proper way to deploy Exchange Server on virtual hardware and why it’s better to avoid cutting-edge hypervisor features.

When is Exchange Server virtualization the right choice?

The decision to virtualize a new Exchange deployment would be easy if the only concerns were technical. This choice gets difficult when politics enter the equation.

Email is one of the more visible services provided by an IT department. Apart from accounting systems, companies rely on email more than any other information technology. Problems with email availability can affect budgets, jobs — even careers.

Some organizations spend a sizable portion of the IT department budget on the storage systems that run under the virtual platform. It may be a political necessity to use those expensive resources for high-visibility services such as messaging, even when deploying Exchange on dedicated hardware would be less expensive and a better technical answer. While I believe that the best Exchange deployment is almost always done on physical hardware — in accordance with the Preferred Architecture guidelines published by the Exchange engineering team — a customer’s requirements might steer the deployment to virtualized infrastructure.

How do I size my virtual Exchange servers?

Microsoft recommends sizing virtual Exchange servers the same way as physical Exchange servers. My recommendations for this procedure are:

  • Use the Exchange Server Role Requirements Calculator as if the intent was to build physical servers.
  • Take the results, and create virtual servers that are as close as possible to the results from the calculator.
  • Turn off any advanced virtualization features in the hypervisor.

Why should I adjust the hypervisor settings?

Some hypervisor vendors say that the X or Y feature in their product will help the performance or stability of virtualized Exchange. But keep in mind these companies want to sell a product. Some of those add-on offerings are beneficial, some are not. I have seen some of these vaunted features cause terrible problems in Exchange. In my experience, most stable Exchange Server deployments do not require any fancy virtualization features.

What virtualization features does Microsoft support?

Microsoft’s support statement for virtualization of Exchange 2016 is lengthy, but the essence is to make the Exchange VMs as close to physical servers as possible.

Microsoft does not support features that move a VM from one host to another unless the failover event results in a cold boot of the Exchange server. The company also does not support features that share resources among multiple VMs running virtualized Exchange.

Where are the difficulties with Exchange Server virtualization?

The biggest problem with deploying Exchange on virtual servers is that it’s often impossible to follow the proper deployment procedures, specifically the validation of storage IOPS on a new Exchange server with Jetstress. This tool checks that the storage hardware delivers enough IOPS to Exchange for a smooth experience.

Generally, a virtual host will use shared storage for the VMs it hosts. Running Jetstress on a new Exchange VM on that storage setup will cause an outage for other servers and applications. Due to this shared arrangement, it is difficult to gauge whether the storage equipment for a virtualized Exchange Server will provide sufficient performance.  

While it’s an acceptable practice to run Exchange Server on virtual hardware, I find it often costs more money and performs worse than a physical deployment. That said, there are often circumstances outside of the control of an Exchange administrator that require the use of virtualization.

To avoid trouble, try not to veer too far from Microsoft’s guidelines. The farther you stray from the company’s recommendations, the more likely you are to have problems.

December Patch Tuesday closes year on a relatively calm note

Administrators were greeted with a subdued December Patch Tuesday, a quiet end to a year that had begun tumultuously in early 2017.

Of the 32 unique Common Vulnerabilities and Exposures (CVEs) that Microsoft addressed, just three patches were directly related to Windows operating systems. While not a critical exploit, the patch for CVE-2017-11885, which affects Windows client and server operating systems, is where administrators should focus their attention.

The patch is for a Remote Procedure Call (RPC) vulnerability for machines with the Routing and Remote Access service (RRAS) enabled. RRAS is a Windows service that allows remote workers to use a virtual private network to access internal network resources, such as files and printers.

“Anyone who has RRAS enabled is going to want to deploy the patch and check other assets to make sure RRAS is not enabled on any devices that don’t use it actively to prevent the exploitation,” said Gill Langston, director of product management at Qualys Inc., based in Redwood City, Calif.
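
A quick way to spot-check a server is to query the state of the RRAS service, which runs under the service name RemoteAccess; a minimal PowerShell check might look like this:

Get-Service -Name RemoteAccess | Select-Object Status, StartType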

The attacker triggers the exploit by running a specially crafted application against a Windows machine with RRAS enabled.

“Once the bad actor is on the endpoint, they can then install applications and run code,” Langston said. “They establish a foothold in the network, then see where they can spread. The more machines you have under your control, the more ability you have to move laterally within the organization.”

In addition, desktop administrators should roll out updates promptly to apply 19 critical fixes that affect the Internet Explorer and Edge browsers, Langston said.

“The big focus should be on browsers because of the scripting engine updates Microsoft seems to release every month,” he said. “These are all remote-code execution type vulnerabilities, so they’re all critical. That’s obviously a concern because that’s what people are using for browsing.”

Fix released for Windows Malware Protection Engine flaw

On Dec. 6, Microsoft sent out an update to affected Windows systems for a Windows Malware Protection Engine vulnerability (CVE-2017-11937). This emergency repair closed a security hole in Microsoft’s antimalware application, affecting systems on Windows 7, 8.1 and 10, and Windows Server 2016. Microsoft added this correction to the December Patch Tuesday updates.

“The fix happened behind the scenes … but it was recommended [for] administrators using any version of the Malware Protection Engine that it’s set to automatically update definitions and verify that they’re on version 1.1.14405.2, which is not vulnerable to the issue,” Langston said.
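
On systems that run Windows Defender, one way to verify the engine version is with the Defender PowerShell module:

Get-MpComputerStatus | Select-Object AMEngineVersion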

OSes that lack the update are susceptible to a remote-code execution exploit if the Windows Malware Protection Engine scans a specially crafted file, which would give the attacker a range of access to the system, including the ability to view and delete data and to create a new account with full user rights.

Other affected Microsoft products include Exchange Server 2013 and 2016, Microsoft Forefront Endpoint Protection, Microsoft Security Essentials, Windows Defender and Windows Intune Endpoint Protection.

“Microsoft uses the Forefront engine to scan incoming email on Exchange 2013 and Exchange 2016, so they were part of this issue,” Langston said.

Lessons learned from WannaCry

Microsoft in May surprised many in IT when the company released patches for unsupported Windows XP and Windows Server 2003 systems to stem the tide of WannaCry ransomware attacks. Microsoft had closed this exploit for supported Windows systems in March, but it took the unusual step of releasing updates for OSes that had reached end of life.

Many of the Windows malware threats from early 2017 spawned from exploits found in the Server Message Block (SMB) protocol, which is used to share files on the network. The fact that approximately 400,000 machines were hit by the ransomware showed how difficult it is for IT to keep up with patching demands.

“WannaCry woke people back up to how critical it is to focus on your patch cycles,” Langston said.

More than three months elapsed between March, when Microsoft first patched the SMB vulnerability that WannaCry exploited, and the arrival of the Petya ransomware, which used the same SMB exploit to compromise still-unpatched machines. Some administrators might be lulled into a false sense of security by the cumulative update servicing model and delay the patching process, Langston said.

“They may delay because the next rollup will cover the updates they missed, but then that’s more time those machines are unprotected,” he said.

For more information about the remaining security bulletins for December Patch Tuesday, visit Microsoft’s Security Update Guide.

Tom Walat is the site editor for SearchWindowsServer. Write to him at twalat@techtarget.com or follow him @TomWalatTT on Twitter.