Tag Archives: review

How to Use Failover Clusters with 3rd Party Replication

In this second post, we will review the different replication options and give you guidance on what you need to ask your storage vendor if you are considering a third-party storage replication solution.

If you want to set up a resilient disaster recovery (DR) solution for Windows Server and Hyper-V, you’ll need to understand how to configure a multi-site cluster, as this also provides local high availability. In the first post in this series, you learned about the best practices for planning the location, node count, quorum configuration, and hardware setup. The next critical decision is how to maintain identical copies of your data at both sites, so that the same information is available to your applications, VMs, and users.

Multi-Site Cluster Storage Planning

All Windows Server Failover Clusters require some type of shared storage so that an application can run on any host and access the same data. Multi-site clusters behave the same way, but they require an independent storage array at each site, with the data replicated between them. The clustered application or virtual machine (VM) at each site should use its own local storage array; otherwise, every disk I/O operation would have to travel to the other location, introducing significant latency.

If you are running Hyper-V VMs on your multi-site cluster, you may wish to use Cluster Shared Volumes (CSV) disks. This type of clustered storage configuration is optimized for Hyper-V and allows multiple virtual hard disks (VHDs) to reside on the same disk while the VMs run on different nodes. The challenge when using CSV in a multi-site cluster is ensuring that each VM always writes to the disk at its own site, and not to the replicated copy. Most storage providers offer CSV-aware solutions, but you must make sure that they explicitly support multi-site clustering scenarios. Often the vendor will force writes to the primary site by making the CSV disk at the second site read-only, to ensure that the correct disks are always being used.
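If you want to inspect your CSV configuration from PowerShell, here is a minimal sketch using the FailoverClusters module that ships with the Failover Clustering feature. Run it on a cluster node; the disk name in the second command is a placeholder.

# List each Cluster Shared Volume with its owner node and current state.
Import-Module FailoverClusters
Get-ClusterSharedVolume | Select-Object -Property Name, OwnerNode, State

# Show the per-node I/O state of one CSV, e.g. direct vs. redirected access.
# 'Cluster Disk 1' is a placeholder; substitute your own CSV name.
Get-ClusterSharedVolumeState -Name 'Cluster Disk 1'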

Understanding Synchronous and Asynchronous Replication

As you progress in planning your multi-site cluster you will have to select how your data is copied between sites, either synchronously or asynchronously. With asynchronous replication, the application will write to the clustered disk at the primary site, then at regular intervals, the changes will be copied to the disk at the secondary site. This usually happens every few minutes or hours, but if a site fails between replication cycles, then any data from the primary site which has not yet been copied to the secondary site will be lost. This is the recommended configuration for applications that can sustain some amount of data loss, and this generally does not impose any restrictions on the distance between sites. The following image shows the asynchronous replication cycle.

Asynchronous Replication in a Multi-Site Cluster

With synchronous replication, whenever a disk write occurs at the primary site, it is copied to the secondary site, and the write is only acknowledged and committed once both the primary and secondary storage arrays have received it. Synchronous replication ensures consistency between both sites and avoids data loss if one site crashes unexpectedly. The challenge of writing to two sets of disks in different locations is that the sites must be physically close to each other, or the performance of the application can suffer. Even with a high-bandwidth and low-latency connection, synchronous replication is usually recommended only for critical applications that cannot sustain any data loss, and this should be weighed against the location of your secondary site. The following image shows the synchronous replication cycle.

Synchronous Replication in a Multi-Site Cluster

As you continue to evaluate different storage vendors, you may also want to assess the granularity of their replication solution. Most of the traditional storage vendors will replicate data at the block-level, which means that they track specific segments of data on the disk which have changed since the last replication. This is usually fast and works well with larger files (like virtual hard disks or databases), as only blocks that have changed need to be copied to the secondary site. Some examples of integrated block-level solutions include HP’s Cluster Extension, Dell/EMC’s Cluster Enabler (SRDF/CE for DMX, RecoverPoint for CLARiiON), Hitachi’s Storage Cluster (HSC), NetApp’s MetroCluster, and IBM’s Storage System.

There are also some storage vendors that provide a file-based replication solution that can run on top of commodity storage hardware. These providers keep track of individual files which have changed, and only copy those. They are often less efficient than the block-level replication solutions because larger chunks of data (full files) must be copied; however, the total cost of ownership can be much lower. A few of the top file-level vendors who support multi-site clusters include Symantec’s Storage Foundation High Availability, Sanbolic’s Melio, SIOS’s DataKeeper Cluster Edition, and Vision Solutions’ Double-Take Availability.

The final class of replication providers will abstract the underlying sets of storage arrays at each site. This software manages disk access and redirection to the correct location. The more popular solutions include EMC’s VPLEX, FalconStor’s Continuous Data Protector and DataCore’s SANsymphony. Almost all of the block-level, file-level, and appliance-level providers are compatible with CSV disks, but it is best to check that they support the latest version of Windows Server if you are planning a fresh deployment.

By now you should have a good understanding of how you plan to configure your multi-site cluster and your replication requirements, so you can plan your backup and recovery process. Even though the application’s data is being copied to the secondary site, which is similar to a backup, replication does not replace the real thing: if a VM’s virtual hard disk (VHD) becomes corrupted at one site, that same corruption will likely be copied to the secondary site. You should still regularly back up any production workloads running at either site. This means that you need to deploy your cluster-aware backup software and agents in both locations and ensure that they are regularly taking backups. The backups should also be stored independently at both sites so that they can be recovered from either location if one datacenter becomes unavailable. Testing recovery from both sites is strongly recommended. Altaro’s Hyper-V Backup is a great solution for multi-site clusters and is CSV-aware, ensuring that your disaster recovery solution is resilient to all types of disasters.

If you are looking for a more affordable multi-site cluster replication solution, only have a single datacenter, or your storage provider does not support these scenarios, Microsoft offers a few solutions. This includes Hyper-V Replica and Azure Site Recovery, and we’ll explore these disaster recovery options and how they integrate with Windows Server Failover Clustering in the third part of this blog series.

Let us know if you have any questions in the comments form below!


Author: Symon Perriman

Finalized TLS 1.3 update has been published at last

The finalized version of TLS 1.3 was published last week following a lengthy draft review process.

The Internet Engineering Task Force (IETF) published the latest version of the Transport Layer Security protocol used for internet encryption and authentication on Friday, Aug. 10, 2018, after starting work on it in April 2014. The final draft, version 28, was approved in March. It replaces the previous standard, TLS 1.2, which was published in RFC 5246 in August 2008. Originally based on the Secure Sockets Layer protocol, the new version of TLS has been revised significantly.

“The protocol [TLS 1.3] has major improvements in the areas of security, performance, and privacy,” IETF wrote in a blog post.

Specifically, TLS 1.3 “provides additional privacy for data exchanges by encrypting more of the negotiation handshake to protect it from eavesdroppers,” compared with TLS 1.2, IETF explained. “This enhancement helps protect the identities of the participants and impede traffic analysis.”

TLS 1.3 also has forward secrecy by default, so sessions recorded today will stay secure even if the long-term keys that protected them are compromised in the future, according to IETF.

“With respect to performance, TLS 1.3 shaves an entire round trip from the connection establishment handshake,” IETF wrote in its blog post announcing the finalized protocol. “In the common case, new TLS 1.3 connections will complete in one round trip between client and server.”

As a result, TLS 1.3 is expected to be faster than TLS 1.2. It also removes outdated cryptography, such as the RSA key exchange, 3DES and static Diffie-Hellman, and thus frees TLS 1.3 of vulnerabilities that plagued TLS 1.2, such as FREAK and Logjam.

“Although the previous version, TLS 1.2, can be deployed securely, several high profile vulnerabilities have exploited optional parts of the protocol and outdated algorithms,” IETF wrote. “TLS 1.3 removes many of these problematic options and only includes support for algorithms with no known vulnerabilities.”
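If you want to test a server’s support for the new version from a client, here is a minimal sketch in PowerShell. It assumes Windows PowerShell running on .NET Framework 4.8 or later (where the Tls13 enumeration value exists) on an operating system whose TLS stack supports 1.3; the URL is a placeholder.

# Restrict outgoing requests in this session to TLS 1.3 only.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls13

# If the server cannot negotiate TLS 1.3, this request fails with a
# secure-channel error, making it a quick protocol-support check.
Invoke-WebRequest -Uri 'https://example.com' -UseBasicParsing |
    Select-Object -Property StatusCode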

And, as Mozilla explained in a blog post, “TLS 1.3 is designed in cooperation with the academic security community and has benefitted from an extraordinary level of review and analysis. This included formal verification of the security properties by multiple independent groups; the TLS 1.3 RFC cites 14 separate papers analyzing the security of various aspects of the protocol.”

TLS 1.3 has already been widely deployed, according to Mozilla. The Firefox and Google Chrome browsers have draft versions deployed, with final version deployments on the way. And Cloudflare, Google and Facebook have also partially deployed the protocol.

How to Create Automated Hyper-V Performance Reports

Wouldn’t it be nice to periodically get an automatic performance review of your Hyper-V VMs? Well, this blog post shows you how to do exactly that.

Hyper-V Performance Counters & Past Material

Over the last few weeks, I’ve been working with Hyper-V performance counters and PowerShell, developing new reporting tools. I thought I’d write about Hyper-V Performance counters here until I realized I already have.
https://www.altaro.com/hyper-v/performance-counters-hyper-v-and-powershell-part-1/
https://www.altaro.com/hyper-v/hyper-v-performance-counters-and-powershell-part-2/
Even though I wrote these articles several years ago, nothing has really changed. If you aren’t familiar with Hyper-V performance counters I encourage you to take a few minutes and read these. Otherwise, some of the material in this article might not make sense.

Get-CimInstance

Normally, using Get-Counter is a better approach, especially if you want to watch performance over a given timespan. But sometimes you just want a quick point-in-time snapshot. Or you may have network challenges: as far as I can tell, Get-Counter uses legacy networking protocols, i.e. RPC and DCOM, which are not very firewall friendly. You could use PowerShell Remoting and Invoke-Command to run Get-Counter on the remote server. Or you can use Get-CimInstance, which is what I want to cover in this article.

When you run Get-Counter, you are actually querying performance counter classes in WMI. This means you can get the same information using Get-CimInstance, or Get-WmiObject. But because we want to leverage WSMan and PowerShell Remoting, we’ll stick with the former.
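To make that concrete, here is a minimal sketch that queries one real Hyper-V counter class over WSMan; the host name SRV1 is a placeholder.

# Query a "cooked" (formatted) Hyper-V counter class over WSMan
# instead of Get-Counter's RPC/DCOM transport.
Get-CimInstance -ClassName Win32_PerfFormattedData_HvStats_HyperVHypervisorLogicalProcessor -ComputerName SRV1 |
    Select-Object -Property Name, PercentTotalRunTime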

Building a Hyper-V Performance Report

First, we need to identify the counter classes. I’ll focus on the classes that have “cooked” or formatted data.

I’m setting a variable for the computername so that you can easily re-use the code. I’m demonstrating this on a Windows 10 desktop running Hyper-V but you can just as easily point $Computer to a Hyper-V host.
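Here is a sketch of those two steps; the 'hyperv' pattern is an assumption that should match the Hyper-V counter class names.

# Target computer; point this at a Hyper-V host if you have one.
$Computer = $env:COMPUTERNAME

# Discover the "cooked" performance counter classes related to Hyper-V.
$classes = Get-CimClass -ClassName Win32_PerfFormattedData* -ComputerName $Computer |
    Where-Object { $_.CimClassName -match 'hyperv' } |
    Select-Object -ExpandProperty CimClassName

$classes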

It is pretty easy to leverage the PowerShell pipeline and create a report for all Hyper-V performance counters.
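A sketch of that idea, building on the $Computer and $classes variables from above (C:\Reports is an assumed output folder that must already exist):

$report = "C:\Reports\$Computer-HyperVPerf.txt"

foreach ($class in $classes) {
    # Write the class name as a section header, then every instance below it.
    "==== $class ====" | Out-File -FilePath $report -Append
    Get-CimInstance -ClassName $class -ComputerName $Computer |
        Format-List -Property * |
        Out-File -FilePath $report -Append
}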

The text file will list each performance counter class followed by all instances of that class. If you run this code, you’ll see there are a number of properties that won’t have any values. It might help to filter those out. Here’s a snippet of code that is a variation on the text file. This code creates an HTML report, skipping properties that likely will have no value.
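Here is a sketch of that approach; the property-name filter and the six-property threshold for choosing a table versus a list are my assumptions.

$fragments = foreach ($class in $classes) {
    $instances = Get-CimInstance -ClassName $class -ComputerName $Computer
    if ($instances) {
        # Keep only properties that have a value, skipping CIM/PowerShell metadata.
        $props = $instances[0].PSObject.Properties |
            Where-Object { $null -ne $_.Value -and $_.Name -notmatch '^(Cim|PS)' } |
            Select-Object -ExpandProperty Name

        # A few properties fit nicely in a table; many read better as a list.
        $as = if ($props.Count -le 6) { 'Table' } else { 'List' }

        $instances | Select-Object -Property $props |
            ConvertTo-Html -Fragment -As $as -PreContent "<h2>$class</h2>"
    }
}

ConvertTo-Html -Body $fragments -Title "Hyper-V Performance: $Computer" |
    Out-File -FilePath "C:\Reports\$Computer-HyperVPerf.html"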

This code creates an HTML report using fragments. I also am dynamically deciding to create a table or a list based on the number of properties.

HTML Performance Counter Report

Thus far I’ve been creating reports for all performance counters and all instances. But you might only be interested in a single virtual machine. This is a situation where you can take advantage of WMI filtering.

In looking at the output from all classes, I can see that the Name property on these classes can include the virtual machine name as part of the value. So I will go through every class and filter only for instances that contain the name of the VM I want to monitor.

This example also adds a footer to the report showing when it was created.
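Putting the WQL filter and the footer together, here is a sketch for a single VM; the VM name is a placeholder.

# The VM to report on; the WQL wildcard match is deliberately loose.
$vmName = 'Win10-Test'

$fragments = foreach ($class in $classes) {
    $instances = Get-CimInstance -ClassName $class -ComputerName $Computer -Filter "Name LIKE '%$vmName%'"
    if ($instances) {
        $instances | ConvertTo-Html -Fragment -PreContent "<h2>$class</h2>"
    }
}

# Add a footer recording when the report was created.
$footer = "<p><i>Report created $(Get-Date)</i></p>"

ConvertTo-Html -Body $fragments -PostContent $footer -Title "Performance: $vmName" |
    Out-File -FilePath "C:\Reports\$vmName-Perf.html"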

HTML Performance Report for a Single VM

It doesn’t take much more effort to create a report for each virtual machine. I turned my code into the beginning of a usable PowerShell function, designed to take pipeline input.
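Here is a sketch of such a function; the name, parameters, and defaults are mine rather than anything canonical.

Function New-VMPerfReport {
    [CmdletBinding()]
    Param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$VMName,

        [string]$ComputerName = $env:COMPUTERNAME,
        [string]$ReportFolder = 'C:\Reports'
    )
    Begin {
        # Discover the cooked Hyper-V counter classes once per pipeline run.
        $classes = Get-CimClass -ClassName Win32_PerfFormattedData* -ComputerName $ComputerName |
            Where-Object { $_.CimClassName -match 'hyperv' } |
            Select-Object -ExpandProperty CimClassName
    }
    Process {
        # One HTML fragment per class that has instances for this VM.
        $fragments = foreach ($class in $classes) {
            $instances = Get-CimInstance -ClassName $class -ComputerName $ComputerName -Filter "Name LIKE '%$VMName%'"
            if ($instances) {
                $instances | ConvertTo-Html -Fragment -PreContent "<h2>$class</h2>"
            }
        }
        $path = Join-Path -Path $ReportFolder -ChildPath "$VMName-Perf.html"
        ConvertTo-Html -Body $fragments -Title "Performance: $VMName" -PostContent "<p><i>Report created $(Get-Date)</i></p>" |
            Out-File -FilePath $path

        # Emit the report file so the caller can see what was created.
        Get-Item -Path $path
    }
}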

With this function, I can query as many virtual machines and create a performance report for each.
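For example, assuming the Hyper-V module’s Get-VM cmdlet is available on the machine running the script:

# Create one report per virtual machine on the host.
Get-VM -ComputerName $Computer |
    Select-Object -ExpandProperty Name |
    New-VMPerfReport -ComputerName $Computer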

You can take this idea a step further and run this as a PowerShell scheduled job, perhaps saving the report files to an internal team web server.
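Here is a sketch using the PSScheduledJob module; the script path and UNC report folder are placeholders, and the dot-sourced script is assumed to define New-VMPerfReport. Run this from an elevated session.

# Register a job that rebuilds the reports every morning at 6:00 AM.
$trigger = New-JobTrigger -Daily -At '6:00AM'

Register-ScheduledJob -Name 'HyperVPerfReport' -Trigger $trigger -ScriptBlock {
    # The job runs in a fresh session, so load the function first.
    . 'C:\Scripts\New-VMPerfReport.ps1'
    Get-VM | Select-Object -ExpandProperty Name |
        New-VMPerfReport -ReportFolder '\\WebServer\Reports'
}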

I have at least one other intriguing PowerShell technique for working with Hyper-V performance counters, but I think I’ve given you enough to work with for today so I’ll save it until next time.

Wrap-Up

Did you find this useful? Have you done something similar and used a different method? Let us know in the comments section below!

Thanks for Reading!

Mixed Reality @ Microsoft – June 2018 Update – Windows Experience Blog

Recent Microsoft-Harvard Business Review survey shows 87 percent of respondents are currently exploring, piloting, or deploying mixed reality in their company.

Hey everyone — I hope this month’s blog post finds you well!

Today, we are welcoming the solstice in the U.S., and I am very much looking forward to summer in Seattle. In addition to some planned vacation time, I will also be working with our team and partners on some exciting product development efforts for mixed-reality business applications. I can’t wait to share more about that in the coming months!

But before we look too far ahead, June has already been filled with some cool mixed-reality moments.

Earlier this month my colleagues Dio Gonzalez and Katie Kelly presented at the sixth annual Augmented World Expo (AWE) in Santa Clara, California. I was encouraged but not at all surprised to hear from them about the tremendous growth of the conference, with many more incredible and varied AR solutions than ever before. This mirrors the signals we’ve long observed at Microsoft and aligns with the level of activity we continue to see in this space: Mixed-reality technology is increasingly providing demonstrable value across a wide range of workplace scenarios, which is fueling further interest from developers and businesses alike. AWE is a great conference, and I hope to be able to join again next year.

Supporting this observation, Microsoft recently partnered with Harvard Business Review Analytic Services to conduct a survey investigating the unique role and importance of mixed reality within the context of the modern workplace. This research surveyed 394 executives of companies with more than 250 employees each and spanning several industries, from manufacturing, engineering, and construction to retail, defense, and education.

The results—which you can read here—were released today, and the findings are fascinating: Among a great many observations, we learned that 87 percent of respondents are currently exploring, piloting, or deploying mixed reality in their company workflows. Similarly, 68 percent of respondents believe that mixed reality will play an important role in helping to achieve their companies’ strategic goals over the next 18 months.

The survey results identified several exciting areas of opportunity in the growing mixed-reality space.

One of the key opportunities is with Firstline Workers, who make up 80 percent of the workforce but often have limited access to relevant, contextual information due to the on-the-field nature of their jobs. These are the workers who are typically on the frontlines of any business workflow: behind the counters, in the clinics, traveling between customers for field service, or on the factory floors. Several of Microsoft’s commercial customers, for instance, are already empowering their Firstline Workers today with mixed-reality solutions that enable remote assistance, spatial planning, environmentally contextual data, and much more. Mixed reality allows these Firstline Workers to conduct their usual, day-to-day activities with the added benefit of heads-up, hands-free access to incredibly valuable, contextual information.

Lastly, a couple of days ago Alex Kipman spoke about mixed reality in the modern workplace at LiveWorx in Boston. LiveWorx brings together business decision-makers (BDMs), engineers, and developers to learn about the tools available to help drive digital transformation in the workplace, such as IoT, mixed reality, and robotics.

Given our mission to help empower people and companies to achieve more, the conference was a great fit for our team. Alex hit on Microsoft’s strategy for mixed reality, in particular how it will serve to accelerate our ambition for an Intelligent Cloud and an Intelligent Edge. For those who have been with us on our mixed-reality journey, and for those who are just joining us, his fireside chat with Jon Fortt is a must-watch.

I am already looking forward to next month’s blog. In the meantime, as always, I’m available on Twitter (@lorrainebardeen) and eager to hear about what you’re doing with mixed reality.

Talk soon!

Lorraine

