How to Enable Advanced Threat Protection in Microsoft 365

As more of the workforce connects from home, there has been a spike in usage of remote productivity services. Many organizations are giving Microsoft Office 365 subscriptions to all of their staff and relying more heavily on collaboration tools such as Outlook, OneDrive, SharePoint, and Teams.

Unfortunately, this creates new security exposure, as more untrained workers are targeted by malware or ransomware delivered through attachments, links, or phishing attacks.

This article will provide you with an overview of how Microsoft Office 365 Advanced Threat Protection (ATP) can help protect your organization, along with links to help you enable each service.

ATP is included in the Microsoft Office 365 Business Premium, Enterprise E5, and Education A5 subscriptions, but it can be added to almost any subscription. For additional information about ATP and Microsoft Office 365 security, check out Altaro’s upcoming webinar Critical Security Features in Microsoft Office 365 Admins Simply Can’t Ignore.

What is Advanced Threat Protection?

Microsoft Office 365 now comes with the Advanced Threat Protection service which secures emails, attachments, and files by scanning them for threats. This cloud service uses the latest in machine learning from the millions of mailboxes it protects to proactively detect and resolve common attacks. This technology has also been extended beyond just email to protect many other components of the Microsoft Office suite. In addition to ATP leveraging Microsoft’s global knowledge base, your organization can use ATP to create your own policies, investigate unusual activity, simulate threats, automate responses, and view reports.

Advanced Threat Protection (Source: Microsoft techcommunity)

Safe Links

Microsoft Office 365 ATP helps your users determine whether a link is safe when using Outlook, Teams, OneNote, Word, Excel, PowerPoint, and Visio. Malicious or misleading links are a common method for hackers to direct unsuspecting users to a site that can steal their information. These emails are often disguised to look like they come from a manager or the IT staff within the company. ATP automatically scans links in emails and cross-references them against a public or customized list of dangerous URLs. If a user clicks a malicious link, ATP displays a warning so they understand the risk before continuing to the website.
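The cross-referencing idea can be illustrated with a short sketch. This is not how ATP is actually implemented (the real service uses Microsoft's global threat intelligence and evaluates links at click time); the blocklist contents and function name here are purely hypothetical:

```python
# Illustrative sketch of blocklist-style link checking (NOT the actual ATP
# implementation): extract URLs from a message body and flag any that
# appear on a list of known-dangerous URLs. The blocklist is hypothetical.
import re

BLOCKLIST = {
    "http://malicious.example.com/login",
    "http://phish.example.net/reset",
}

URL_PATTERN = re.compile(r"https?://[^\s\"'>]+")

def scan_message(body: str) -> list:
    """Return the URLs in `body` that match the blocklist."""
    return [url for url in URL_PATTERN.findall(body) if url in BLOCKLIST]

flagged = scan_message(
    "Please verify your account at http://malicious.example.com/login today."
)
# flagged == ["http://malicious.example.com/login"]
```

A real implementation would also have to handle URL shorteners, redirects, and lookalike domains, which is why ATP evaluates the link again at the moment the user clicks it.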

How to enable ATP Safe Links

Safe Attachments

One of the most common ways your users will be attacked is by opening an attachment that is infected with malware. When the file is opened, it could execute a script that steals passwords or locks up the computer until a ransom is paid, in what is commonly known as a ransomware attack. ATP automatically scans all attachments for known viruses. You and your users will be notified about anything suspicious to help you avoid infection.

How to enable ATP Safe Attachments

Anti-Phishing Policies

When ATP anti-phishing is enabled, all incoming messages will be analyzed for possible phishing attacks. Microsoft Office 365 uses cloud-based AI to look for unusual or suspicious message elements, such as mismatched descriptions, links, or domains. Whenever an alert is triggered, the user is immediately warned, and the alert is logged so that it can be reviewed by an admin.

How to enable ATP Anti-Phishing

Real-time Detection & Reports

Approved users will have access to the ATP dashboard along with reports about recent threats. These reports contain detailed information about malware, phishing attacks, and submissions. The Malware Status Report lets you see detected malware by type and method, along with the status of each message carrying a threat. The URL Protection Status Report displays the number of threats discovered for each hyperlink or application and the resulting action taken by the user. The ATP Message Disposition report shows the actions taken on messages containing different types of malicious file attachments. The Email Security Reports include details about the top senders, recipients, spoofed mail, and spam detection.

How to view all the various ATP reports. Note: there are some more advanced reports which must be triggered through a PowerShell cmdlet.

Threat Explorer

Another important component of ATP is the Threat Explorer, which allows admins or authorized users to get real-time information about active threats in the environment through a GUI console. It allows you to preview an email header and download an email body; for privacy reasons, this is only permitted if permission is granted through role-based access control (RBAC). You can then trace any copies of the email throughout your environment to see whether it has been routed, delivered, blocked, replaced, failed, dropped, or junked. You can even view a timeline of the email to see how it has been accessed over time by recipients in your organization. Some users can also report suspicious emails, and you can use this dashboard to view those messages.

How to enable ATP Threat Explorer

Threat Trackers

Microsoft Office 365 leverages its broad network of endpoints to identify and report on global attacks. Administrators can add any Threat Tracker widgets which they want to follow to their dashboard through the ATP interface. This allows you to track major threats attacking your region, industry, or service type.

How to enable ATP Threat Trackers

Automated Incident Response

Another great security feature of Microsoft Office 365 ATP is the ability to automatically investigate well-known threats. Once a threat is detected, the Automated Incident Response (AIR) feature will try to categorize it and start remediating the issue based on industry-standard best practices. This could include providing recommendations, or quarantining or deleting the infected file or message.

How to use Automated Incident Response (AIR)

Attack Simulator

One challenge that many organizations experience when developing a protection policy is their inability to test how their users would actually respond to an attempted attack. The ATP Attack Simulator is a utility that authorized administrators can use to create artificial phishing and password attacks. These fake email campaigns try to identify and then educate vulnerable users by convincing them to perform an action that could expose them to a hacker. This utility can run a Spear Phishing Campaign, Brute Force Attack, and a Password Spray Attack.

How to enable the ATP Attack Simulator

This diverse suite of tools, widgets, and simulators can help admins protect their remote workforce from the latest attacks. Microsoft applies its artificial intelligence capabilities to learn from how millions of mailboxes share information, and uses those insights to harden the security of the entire platform.

If you want to learn more about Microsoft Office 365 ATP and Microsoft Office 365 in general, attend the upcoming Altaro webinar on May 27. I will be presenting it along with Microsoft MVP Andy Syrewicze, so it’s your chance to ask me any questions you might have about ATP or other Microsoft Office 365 security features live! It’s a must-attend for all admins – save your seat now!

Microsoft Office 365 ATP Altaro Webinar

Is Your Office 365 Data Secure?

Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs. 

Start your Free Trial now

Author: Symon Perriman

Managing Mailbox Retention and Archiving Policies in Microsoft 365

Microsoft 365 (formerly Office 365) provides a wide set of options for managing data classification, retention of different types of data, and archiving data. This article will show the options a Microsoft 365 administrator has when setting up retention policies for Exchange, SharePoint, and other Microsoft 365 workloads and how those policies affect users in Outlook. It’ll also cover the option of an Online Archive Mailbox and how to set one up.

There’s also an accompanying video to this article which shows you how to configure a retention policy, retention labels, enabling Archive mailboxes, and creating a move to archive retention tag.


Before we continue, we know that for all Microsoft 365 admins security is a priority. And in the current climate of COVID-19, it’s well documented how hackers are working around the clock to exploit vulnerabilities. As such, we assembled two Microsoft experts to discuss the critical security features in Microsoft 365 you should be using right now in a free webinar on May 27. Don’t miss out on this must-attend event – save your seat now!

How To Manage Retention Policies in Microsoft 365

There are many reasons to consider labeling data and using retention policies but before we discuss these let’s look at how Office 365 manages your data in the default state. For Exchange Online (where mailboxes and Public Folders are stored if you use them), each database has at least four copies, spread across two datacenters. One of these copies is a lagged copy which means the replication to it is delayed, to provide the option to recover from a data corruption issue. In short, a disk, server, rack, or even datacenter failure isn’t going to mean that you lose your mailbox data.

Further, the default policy (for a few years now) is that deleted items in Outlook stay in the Deleted Items folder “forever”, until you empty the folder or the items are moved to an archive mailbox. If an end-user deletes items out of their Deleted Items folder, they’re kept for another 30 days (as long as the mailbox was created in 2017 or later), meaning the user can recover them by opening the Deleted Items folder and clicking the link.

Where to find recoverable items in Outlook

This opens the dialogue box where a user can recover one or more items.

Recovering deleted items in Exchange Online

If an administrator deletes an entire mailbox it’s kept in Exchange Online for 30 days and you can recover it by restoring the associated user account.

Additionally, it’s important to realize that Microsoft does not back up your data in Microsoft 365. Through native data protection in Exchange Online and SharePoint Online, Microsoft makes sure it will never lose your current data, but if you have deleted an item, document, or mailbox for good, it’s gone. There’s no secret place where Microsoft’s support can get it back from (although it doesn’t hurt to try), hence the popularity of third-party backup solutions such as Altaro Office 365 Backup.

Litigation Hold – the “not so secret” secret

One option that I have seen some administrators employ is litigation or in-place hold (the latter feature is being retired in the second half of 2020), which keeps all deleted items in a hidden subfolder of the Recoverable Items folder until the hold lapses (which could be never if you make it permanent). Note that you need at least an E3 or Exchange Online Plan 2 license for this feature to be available. It is designed to be used when a user is under some form of investigation, ensuring that no evidence can be purged by that user; it’s not designed as a “make sure nothing is ever deleted” policy. However, I totally understand the job security it can bring when the CEO is going ballistic because something super important is “gone”.

Litigation hold settings for a mailbox

Retention Policies

If the default settings and options described above don’t satisfy the needs of your business or any regulatory requirements you may have, the next step is to consider retention policies. A few years ago, there were different policy frameworks for the different workloads in Office 365, showing the on-premises heritage of Exchange and SharePoint. Thankfully, we now have a unified service that spans most Office 365 workloads. Retention in this context refers to ensuring that data can’t be deleted until the retention period expires.

There are two flavors here. The first is label policies, which publish labels to your user base, letting users pick a retention setting by assigning individual emails or documents a label (only one label per piece of content). Note that labels can do two things retention policies can’t: first, they can apply from the date the content was labeled, and second, you can trigger a disposition / manual review of a SharePoint or OneDrive for Business document when the retention expires.

Labels only apply to objects that you label; they don’t retroactively scan through email or documents at rest. While labels can be part of a bigger data classification story, my recommendation is that anything that relies on users remembering to do something extra to manage data will only work with extensive training, and then only for a small subset of very important data. You can (if you have E5 licensing for the users in question) use label policies to automatically apply labels to sensitive content, based on a search query you build (particular email subject lines or recipients, or SharePoint document types in particular sites, for instance) or on a set of trainable classifiers for offensive language, resumes, source code, harassment, profanity, and threats. You can also apply a retention label to a SharePoint library, folder, or document set.

As an aside, Exchange Online also has personal labels that are similar to retention labels but created by users themselves instead of being created and published by administrators.

A more holistic flavor, in my opinion, is retention policies. These apply to all items stored in the various repositories and can apply across several different workloads. Retention policies can both ensure that data is retained for a set period of time AND dispose of it after that period expires, which is often a regulatory requirement. A quick note if you’re going to play around with policies: they’re not applied instantaneously – it can take up to 24 hours or even 7 days, depending on the workload and type of policy – so prepare to be patient.

These policies can apply across Exchange, SharePoint (which means files stored in Microsoft 365 Groups, Teams, and Yammer), OneDrive for business, and IM conversations in Skype for Business Online / Teams and Groups. Policies can be broad and apply across several workloads, or narrow and only apply to a specific workload or location in that workload. An organization-wide policy can apply to the workloads above (except Teams, you need a separate policy for its content) and you can have up to 10 of these in a tenant. Non-org wide policies can be applied to specific mailboxes, sites, or groups or you can use a search query to narrow down the content that the policy applies to. The limits are 10,000 policies in a tenant, each of which can apply to up to 1000 mailboxes or 100 sites.
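Those scoping limits are easy to trip over, so as a rough mental model (a hypothetical helper function, not an actual Microsoft 365 API), a scope check might look like:

```python
# Sketch of checking a proposed retention policy against the limits
# described above (hypothetical function, not a real Microsoft 365 API):
# up to 10 org-wide policies per tenant, and up to 1,000 mailboxes or
# 100 sites for a non-org-wide policy.
def validate_policy_scope(org_wide: bool, existing_org_wide: int = 0,
                          mailboxes: int = 0, sites: int = 0) -> list:
    """Return a list of limit violations for a proposed policy."""
    errors = []
    if org_wide and existing_org_wide >= 10:
        errors.append("tenant already has 10 org-wide policies")
    if not org_wide and mailboxes > 1000:
        errors.append("policy targets more than 1,000 mailboxes")
    if not org_wide and sites > 100:
        errors.append("policy targets more than 100 sites")
    return errors

validate_policy_scope(org_wide=False, mailboxes=1500)
# → ['policy targets more than 1,000 mailboxes']
```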

Especially with org-wide policies, be aware that they apply to ALL selected content, so if you set a policy to retain everything for four years and then delete it, data is going to start disappearing automatically after four years. Note that you can set the “timer” to start when the content was created or when it was last modified; the latter is probably more in line with what people would expect. Otherwise, you could have a list that someone updates weekly disappear suddenly because it was created several years ago.
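The difference between the two “timer” start points can be made concrete with a small sketch (a hypothetical function and parameter names, not an actual Microsoft 365 API):

```python
# Sketch of how the retention "timer" start point changes the outcome.
# `start` chooses between the item's creation date and its last
# modification date; the names here are illustrative, not a real API.
from datetime import date, timedelta

def retention_expiry(created: date, modified: date,
                     retention_days: int, start: str = "modified") -> date:
    """Return the date the retention period ends for an item."""
    anchor = modified if start == "modified" else created
    return anchor + timedelta(days=retention_days)

created = date(2016, 1, 1)    # list created years ago
modified = date(2020, 5, 1)   # but still updated every week
four_years = 4 * 365          # 1460 days

# Timer from creation: the weekly-updated list already expired.
retention_expiry(created, modified, four_years, start="created")   # → 2019-12-31
# Timer from last modification: it survives until 2024.
retention_expiry(created, modified, four_years, start="modified")  # → 2024-04-30
```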

To create a retention policy, log in to the Microsoft 365 admin center, expand Admin centers, and click on Compliance. In this portal, click on Policies and then Retention under Data.

Retention policies link in the Compliance portal

Select the Retention tab and click New retention policy.

Retention policies and creating a new one

Give your policy a name and a description, select which data stores it’s going to apply to, and choose whether the policy will retain and then delete data, or just delete it after the specified time.

Retention settings in a policy

Related, though outside the scope of this article, are sensitivity labels: instead of classifying data based on how long it should be kept, these policies classify data based on the security needs of the content. You can then apply policies to control the flow of emails with this content, or automatically encrypt documents in SharePoint, for instance. You can also combine sensitivity and retention labels in policies.


Since there can be multiple policies applied to the same piece of data and perhaps even retention labels in play there could be a situation where conflicting settings apply. Here’s how these conflicts are resolved.

The first rule is that retention wins over deletion, making sure that nothing is deleted that you expected to be retained. The second is that the longest retention period wins: if one policy says two years and another says five years, the content is kept for five. The third rule is that explicit wins over implicit, so a policy applied to a specific area such as a SharePoint library takes precedence over an organization-wide general policy. Finally, the shortest deletion period wins, so if an administrator has chosen to delete content after a set period of time, it’ll be deleted then, even if another policy requires deletion after a longer period. Here’s a graphic that shows the four rules and their interaction:

Policy conflict resolution rules (courtesy of Microsoft)

As you can see, building a set of retention policies that really work for your business and don’t unintentionally cause problems is a project for the whole business, working out exactly what’s needed across different workloads, rather than the job of a “click-happy” IT administrator.
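To make the conflict-resolution logic concrete, here is a minimal sketch using an illustrative data model (the `Policy` class and `resolve` function are hypothetical, not an actual Microsoft 365 API):

```python
# Sketch of the four conflict-resolution rules for overlapping policies.
# Each policy has an action ("retain" or "delete"), a period in days, and
# a flag for whether it targets the content explicitly (e.g. a specific
# SharePoint library) or implicitly (org-wide).
from dataclasses import dataclass

@dataclass
class Policy:
    action: str        # "retain" or "delete"
    days: int
    explicit: bool = False

def resolve(policies: list) -> tuple:
    """Return (retain_days, delete_days) after applying the four rules."""
    # Rule 3: explicit policies take precedence over implicit ones.
    explicit = [p for p in policies if p.explicit]
    if explicit:
        policies = explicit
    retain = [p.days for p in policies if p.action == "retain"]
    delete = [p.days for p in policies if p.action == "delete"]
    # Rules 1 & 2: retention wins over deletion; the longest period wins.
    retain_days = max(retain) if retain else 0
    # Rule 4: among deletion policies the shortest period wins, but content
    # is never deleted before the longest retention period has elapsed.
    delete_days = max(retain_days, min(delete)) if delete else None
    return retain_days, delete_days

# Two-year vs five-year retention: content is kept for five years.
resolve([Policy("retain", 730), Policy("retain", 1825)])  # → (1825, None)
```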

Archive Mailbox

It all started with trying to rid the world of PST-stored email. Back in the day, when hard drives and SAN storage only provided small amounts of space, many people learnt to “expand” their small mailbox quota with local PST files. The problem is that these local files aren’t backed up and aren’t included in regulatory or eDiscovery searches. Office 365 largely solved this problem by providing generous quotas: the Business plans provide 50 GB per mailbox, whereas the Enterprise plans have 100 GB limits.

If you need more mailbox storage, one option is to enable online archiving, which provides another 50 GB mailbox on the Business plans and an unlimited (see below) mailbox on the Enterprise plans. There are some limitations on this “extra” mailbox: it can only be accessed online, and it’s never synchronized to your offline (OST) file in Outlook. When you search for content, you must select “all mailboxes” to see matches in your archive mailbox. ActiveSync and the Outlook client on Android and iOS can’t see the archive mailbox, and users may need to manually decide what to store in which location (unless you’ve set up your policies correctly).

For these reasons, many businesses avoid archive mailboxes altogether, just making sure that all mailbox data is stored in the primary mailbox (after all, 100 GB is quite a lot of email). Other businesses, particularly those with a lot of legacy PST storage, find these mailboxes fantastic and use either manual upload or even drive shipping to Microsoft 365 to convert all those PSTs into online archives, where the content isn’t going to disappear because of a failed hard drive and where eDiscovery can find it.

For those that really need it and are on E3 or E5 licensing you can also enable auto-expanding archives which will ensure that as you use up space in an online archive mailbox, additional mailboxes will be created behind the scenes to provide effectively unlimited archival storage.

To enable archive mailboxes, go to the Security & Compliance Center, click on Information governance, and select the Archive tab.

The Archive tab

Click on a user’s name to be able to enable the archive mailbox.

Archive mailbox settings

Once you have enabled archive mailboxes, you’ll need a policy to make sure that items are moved into them at the cadence you need. Go to the Exchange admin center and click on Compliance management – Retention tags.

Exchange Admin Center – Retention tags

Here you’ll find the Default 2 year move to archive tag, or you can create a new tag by clicking on the + sign.

Exchange Retention tags default policies

Pick Move to Archive as the action, give the tag a name, and select the number of days that must pass before the move happens.

Creating a custom Move to archive policy

Note that online archive mailboxes have NOTHING to do with the Archive folder that you see in the folder tree in Outlook; that is just an ordinary folder you can move items into from your inbox for later processing. The Archive folder is available on mobile clients and when you’re offline, and you can swipe in Outlook mobile to automatically store emails in it.


Now you know how and when to apply retention policies and retention tags in Microsoft 365, as well as when online archive mailboxes are appropriate and how to enable them and configure policies to archive items.

Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:

Critical Security Features in Office/Microsoft 365 Admins Simply Can’t Ignore


Author: Paul Schnackenburg

How to Recover Deleted Emails in Microsoft 365

When the CEO realizes they deleted a vital email thread three weeks ago, email recovery suddenly becomes an urgent task. Sure, you can look in the Deleted Items folder in Outlook, but beyond that, how can you recover what has undergone “permanent” deletion? In this article, we review how you can save the day by bringing supposedly unrecoverable email back from the great beyond.


Now onto saving your emails!

Deleted Email Recovery in Microsoft and Office 365

Email recovery for Outlook in Exchange Online can be as simple as dragging and dropping the wayward email from the Deleted Items folder to your Inbox. But what do you do when you can’t find the email you want to recover?

First, let’s look at how email recovery is structured in Microsoft 365. There are a few more layers here than you might think! In Microsoft 365, deleted email can be in one of three states: Deleted, Soft-Deleted, or Hard-Deleted. The way you recover email, and how long you have to do so, depends on the email’s delete status and the applicable retention policy.
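The three states and their default recovery windows can be summarized in a small sketch (illustrative constants and function only, not an actual Microsoft 365 API; the numbers are the defaults discussed later in this article):

```python
# Mental model of the three deletion states and their default recovery
# windows. Illustrative only -- not a real Microsoft 365 API.
DELETED = "Deleted"            # item sits in the Deleted Items folder
SOFT_DELETED = "Soft-Deleted"  # in Recoverable Items > Deletions
HARD_DELETED = "Hard-Deleted"  # in Recoverable Items > Purges

# Default recovery windows in days; None means "kept until emptied".
RECOVERY_WINDOW_DAYS = {
    DELETED: None,        # stays in Deleted Items indefinitely by default
    SOFT_DELETED: 14,     # an admin can raise this to a maximum of 30
    HARD_DELETED: 14,     # admin-only recovery, also extendable to 30
}

def who_can_recover(state: str) -> str:
    """Return who can initiate recovery for mail in the given state."""
    if state in (DELETED, SOFT_DELETED):
        return "user"
    if state == HARD_DELETED:
        return "administrator"
    return "nobody (purged)"

who_can_recover(SOFT_DELETED)   # → 'user'
who_can_recover(HARD_DELETED)   # → 'administrator'
```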

Email Recovery in Microsoft 365

Let’s walk through the following graphic and talk about how email gets from one state to another, the default policies, how to recover deleted email in each state, and a few tips along the way.

Items vs. Email

Outlook is all about email yet also has tasks, contacts, calendar events, and other types of information. For example, you can delete calendar entries and may be called on to recover them, just like email. For this reason, the folder for deleted content is called “Deleted Items.” Also, when discussing deletions and recovery, it is common to refer to “items” rather than limiting the discussion to just email.


Various rules control the retention period for items in the different states of deletion. A policy is an automatically applied action that enforces a rule related to services. Microsoft 365 has hundreds of policies you can tweak to suit your requirements. See Overview of Retention policies for more information.

‘Deleted Items’ Email

When you press the Delete key on an email in Outlook, it’s moved to the Deleted Items folder. That email is now in the “Deleted” state, which simply means it moved to the Deleted Items folder. How long does Outlook retain deleted email? By default – forever! You can recover your deleted mail with just a drag and drop to your Inbox. Done!

If you can’t locate the email in the Deleted Items folder, double-check that you have the Deleted Items folder selected, then scroll to the bottom of the email list. Look for the following message:

Outlook Deleted Items Folder

If you see the above message, your cache settings may be keeping only part of the content in Outlook, with the rest in the cloud. The cache helps keep mailbox sizes lower on your hard drive, which in turn speeds up search and load times. Click on the link to download the missing messages.

But I Didn’t Delete It!

If you find content in the Deleted Items and are sure you did not delete it, you may be right! Administrators can set Microsoft 365 policy to delete old Inbox content automatically.

Mail can ‘disappear’ in another way. Some companies enable a personal archive mailbox for users. When enabled, by default, any mail two years or older will “disappear” from your Inbox and the Deleted Items folder. However, there is no need to worry: while apparently missing, the email has simply moved to the archive mailbox’s Inbox. A personal archive mailbox shows up as a stand-alone mailbox in Outlook, as shown below.

Stand-alone mailbox in Outlook

As a result, it’s a good idea to search the archive mailbox, if present, when looking for older messages.

Another setting to check is one that deletes email when Outlook is closed. Access this setting in Outlook by clicking “File,” then “Options,” and finally “Advanced” to display this window:

Outlook Advanced Options

If enabled, Outlook empties the Deleted Items folder when it is closed. The deleted email then moves to the ‘soft-delete’ state, which is covered next. Keep in mind that with this setting, all emails will be permanently deleted after 28 days.

‘Soft-Deleted’ Email

The next stage in the process is Soft-Deleted. Soft-deleted email is no longer in the Deleted Items folder but is still easily recovered. At a technical level, the mail is deleted locally from Outlook and placed in the Exchange Online folder named Deletions, which is a sub-folder of Recoverable Items. Any content in the Recoverable Items folder in Exchange Online is, by definition, considered soft-deleted.

You have, by default, 14 days to recover soft-deleted mail. The service administrator can change the retention period to a maximum of 30 days. Be aware that this can consume some of the storage capacity assigned to each user account, and you could get charged for overages.

How items become soft-deleted

There are three ways to soft-delete mail or other Outlook items.

  1. Delete an item already in the Deleted Items folder. When you manually delete something that is already in the Deleted Items folder, the item is soft-deleted. Any process, manual or otherwise, that deletes content from this folder results in a soft-delete.
  2. Pressing Shift + Delete on an email in your Outlook Inbox will bring up a dialog box asking if you wish to “permanently” delete the email. Clicking Yes removes the email without placing it in the Deleted Items folder, but this only performs a soft-delete. You can still recover the item if you do so within the 14-day retention period.

Soft Deleting Items in Outlook

  3. The final way items can be soft-deleted is by using Outlook policies or rules. By default, there are no policies that automatically remove mail from the Deleted Items folder in Outlook. However, users can create rules that ‘permanently’ delete (soft-delete) email. If you’re troubleshooting missing email, have the user check for such rules: click Rules on the Home menu and examine any created rules in the Rules Wizard shown below.

Microsoft Outlook Policies and Rules

Note that the caution is a bit misleading as the rule’s action will soft-delete the email, which, as already stated, is not an immediate permanent deletion.

Recovering soft-deleted mail

You can recover soft-deleted mail directly in Outlook. Be sure the Deleted Items folder is selected, then look for “Recover items recently removed from this folder” at the top of the mail column, or the “Recover Deleted Items from Server” action on the Home menu bar.

Recovering soft-deleted mail in Outlook

Clicking on the recover items link opens the Recover Deleted Items window.

Recover Deleted Items, Microsoft Outlook

Click on the items you want to recover or Select All, and click OK.

NOTE: The recovered email returns to your Deleted Items folder. Be sure to move it into your Inbox.

If the email you’re looking for is not listed, it could have moved to the next stage: ‘Hard-Deleted.’

While users can recover soft-deleted email, administrators can also recover it on their behalf using the ‘Hard-Deleted’ email recovery process described next (which works for both hard and soft deletions). Also, Microsoft has created two PowerShell cmdlets that are very useful for those who would rather script these tasks: you can use Get-RecoverableItems and Restore-RecoverableItems to search for and restore soft-deleted email.

Hard-Deleted Email

The next stage of deletion is ‘Hard-Deleted.’ Technically, items are hard-deleted when they are moved from the Deletions folder to the Purges folder in Exchange Online. Administrators can still recover items in this folder within the recovery period set by policy, which ranges from 14 days (the default) to 30 days (the maximum). You can extend retention beyond 30 days by placing a legal or litigation hold on the item or mailbox.

How items become Hard-Deleted

There are two ways content becomes hard-deleted.

  1. By policy, soft-deleted email is moved to the hard-deleted stage when the retention period expires.
  2. Users can hard-delete mail manually by selecting the Purge option in the Recover Deleted Items window shown above. (Again, choosing to ‘permanently delete’ mail with Shift + Del, results in a soft-delete, not a hard-delete.)

Recovering Hard-Deleted Mail

Once email enters the hard-delete stage, users can no longer recover the content. Only service administrators with the proper privileges can initiate recovery, and no administrators have those privileges by default, not even the global admin. The global admin does have the right to assign privileges so that they can give themselves (or others) the necessary rights. Privacy is a concern here since administrators with these privileges can search and export a user’s email.

Microsoft’s online documentation Recover deleted items in a user’s mailbox details the step-by-step instructions for recovering hard-deleted content. The process is a bit messy compared to other administrative tasks. As an overview, the administrator will:

  1. Assign the required permissions.
  2. Search the mailbox for the missing email.
  3. Optionally, copy the results to a Discovery mailbox, where you can view mail in the Purged folder.
  4. Export the results to a PST file.
  5. Import the PST into Outlook on the user’s system and locate the missing email in the Purged folder.

Last Chance Recovery

Once hard-deleted items are purged, they are no longer discoverable by any method by users or administrators. You should consider the recovery of such content as unlikely. That said, if the email you are looking for is not recoverable by any of the above methods, you can open a ticket with Microsoft 365 Support. In some circumstances, they may be able to find the email that has been purged but not yet overwritten. They may or may not be willing to look for the email, but it can’t hurt to ask, and it has happened.

What about using Outlook to backup email?

Outlook does allow a user to export email to a PST file. To do this, click “File” in the Outlook main menu, then “Import & Export” as shown below.

Outlook Menu, Import Export

You can specify what you want to export and even protect the file with a password.

While useful from time to time, a backup plan that depends on users manually exporting content to a local file doesn’t scale and isn’t reliable. Consequently, don’t rely on this as a possible backup and recovery solution.

Alternative Strategies

After reading this, you may be thinking, “isn’t there an easier way?” A service like Altaro Office 365 Backup allows you to recover from point-in-time snapshots of an inbox or other Microsoft 365 content. Having a service like this when you get that urgent call to recover a mail from a month ago can be a lifesaver.


Users can recover most deleted email without administrator intervention. Often, deleted email simply sits in the Deleted Items folder until manually cleared. When that occurs, email enters the ‘soft-deleted’ stage and is easily restored by a user within 14 days. After this period, the item enters the ‘hard-deleted’ stage, and a service administrator can recover it within the recovery window. After that, email should be considered unrecoverable. Policies can be applied to extend the retention times of deleted mail in any stage. While administrators can go far with the web-based administration tools, the entire recovery process can be scripted with PowerShell to customize and scale larger projects or provide granular discovery. Finally, it is always a great idea to use a backup solution designed for Microsoft 365, such as Altaro Office 365 Backup.

Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:

Critical Security Features in Office/Microsoft 365 Admins Simply Can’t Ignore

Is Your Office 365 Data Secure?

Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs. 

Start your Free Trial now

Go to Original Article
Author: Brett Hill

R.I.P. Office 365, Long Live Microsoft 365

Microsoft just made sweeping changes to the Office 365 ecosystem, both for personal subscriptions (Office 365 Personal and Home) and Office 365 for Business, sunsetting the Office 365 brand and replacing it with Microsoft 365. This was put in place as of April 21, 2020.

This article will look at what these changes mean, explore the differences between Office 365, Microsoft 365, and Office 2019, examine the subscription model underlying these offerings, and make some predictions for the enterprise services that are still under the Office 365 name.

Office 365 Home and Personal

Let’s start with the home and family subscriptions. Over 500 million people use the free, web-based versions of Word, Excel, etc. along with Skype and OneDrive to collaborate and connect. Then there are 38 million people who have subscribed to Office 365 Home or Office 365 Personal. Both provide the desktop Office suite (Word, Excel, etc.) for Windows and Mac, along with matching applications for iOS and Android and 1 TB of OneDrive space. These two plans are changing name to Microsoft 365 Family ($9.99 per month) and Microsoft 365 Personal ($6.99 per month) respectively. Personal is for a single user, whereas Family works with up to six people (and yes, they each get 1 TB of OneDrive storage, for a maximum of 6 TB). Otherwise, they’re identical and provide advanced spelling, grammar and style assistance in Microsoft Editor (see below), AI-powered design suggestions in PowerPoint, coaching when you rehearse a PowerPoint presentation, and the new Money in Excel (see below). Each user also gets 50 GB of email storage in Outlook, the ability to add a custom email domain, and 60 minutes’ worth of Skype calls to mobiles and landlines.

Office 365 Microsoft 365 Plan Choices

Picking a plan for home use is easy

Microsoft Editor is Microsoft’s answer to Grammarly and is available in Word on the web and the desktop version of Word, as well as an Edge or Chrome extension. It supports more than 20 languages and uses AI to help you with the spelling, grammar, and style of your writing. The basic version is available to anyone, but the advanced features are unlocked with a Personal or Family subscription. These include suggestions for how to write something more clearly (just highlight your original sentence), plagiarism checking with the ability to easily insert citations, and suggestions for improving conciseness and inclusiveness.

Settings for the Microsoft Editor browser extension


Money in Excel connects Excel to your bank and credit card accounts so you can import balances and transactions automatically and provides personalized insights on your spending habits. Money isn’t available yet and will be US only in the first phase when it rolls out over the next couple of months.

Outlook on the web will let you add personal calendars, not only marrying your work and home life but also providing clarity for others seeking to find appointment times with you – of course, they won’t see what’s penned in your calendars, only when you’re not available. Play My Emails is coming to Android (already available on iOS), letting Cortana read your emails to you while you’re on the go. The Teams mobile app is being beefed up for use in your personal life as well. Finally, Microsoft Family Safety is coming to Android and iOS devices, helping parents protect their children when they explore and play games on their devices.

You’ll have noticed that nearly all of these new features and services are on the horizon but not here yet. If you’re already an Office 365 Home or Personal subscriber, your subscription just changed its name to Microsoft 365 Family or Personal; until these new goodies are available, nothing else has changed, including the price of your subscription. Note that none of these changes applies to Office 2019, the perpetual-license version of Word, Excel, etc. that you purchase outright rather than subscribe to. Office 2019 doesn’t provide any cloud-powered, AI-based features, nor does it get the monthly feature updates that its Microsoft 365-based cousin enjoys.

Microsoft 365 Business Basic, Apps, Standard and Premium

Of more interest to readers of Altaro’s blogs are probably the changes to the Office 365 SMB plans (which top out at 300 users). As a quick summary (for a more in-depth look at Office & Microsoft 365, here’s a free eBook from Altaro): Microsoft 365 Business Basic (formerly known as Office 365 Business Essentials, at $5 per user per month) gives each user an Exchange mailbox, Teams and SharePoint access, the web browser versions of Word, Excel, etc. and 1 TB of OneDrive storage.

Microsoft 365 Apps for Business (old name Office 365 Business, $8.25 per user per month) provides the desktop version of Office for Windows, Mac, Android, and iOS devices and 1TB of OneDrive storage.

Microsoft 365 Business Standard (prior name Office 365 Business Premium, a name change that won’t confuse anyone; it weighs in at $12.50 per user per month) gives you both the desktop and web versions of Office.

Finally, Microsoft 365 Business Premium (formerly known as Microsoft 365 Business, again not confusing at all, at $20 per user per month) gives you everything in Standard, plus Office 365 Advanced Threat Protection, Intune based Mobile Device Management (MDM) features, Online Archiving in Exchange and much more.

Microsoft 365 Management Portal


In a separate announcement, Microsoft is bringing the full power of AAD Premium P1 for free to Microsoft 365 Business Premium. This will give SMBs cost-effective access to Cloud App Discovery, which provides insight and protection for users in the modern world of cloud services, including discovering which applications your staff are using. It’ll also bring Application Proxy, which lets you publish on-premises applications to remote workers easily and securely; dynamic groups, which make it easier to ensure staff are in the right groups for their role; and password-less authentication using Windows Hello for Business, FIDO2 security keys and Microsoft’s free Authenticator app.

Note that none of the Enterprise flavors of Office 365 (E1, E3 and E5), F1 for first-line workers, the A1, A3 and A5 plans for education, nor the G1, G3 and G5 varieties for government organizations are changing at this time. My prediction is that this will change, and before long all of these will be moved to the unifying Microsoft brand.

Philosophically there are a few things going on here. As a consultant who both sells and supports Office / Microsoft 365 for businesses, as well as a trainer who teaches people about the services, there’s always been a pretty clear line between the two. Office 365 gives you the Office applications, email and document storage. If you wanted mobile device management (Intune), advanced security features (Azure Active Directory, AAD), Windows 10 Enterprise and Information Protection, you went for Microsoft 365. These features are all available under the moniker Enterprise Mobility + Security (EMS), so essentially Microsoft 365 was Office 365 + EMS.

Adding Microsoft 365 Licenses


This line is now being blurred for the small business plans, which can make it even more difficult to ensure that small and medium businesses pick the right plans for their needs. Remember, though, that you can mix and match the different flavors in a business: just because some users need Microsoft 365 Business Premium doesn’t mean that other roles can’t work well with just Microsoft 365 Business Basic.

And this isn’t a surprise move; even Office 365 administrators have been using the Microsoft 365 management portal for quite some time. Here’s a screenshot of the old, retired Office 365 portal.

Office 365 Admin Center


More broadly, though, I think the brand changes are signalling that Office 365 is “growing up”, and using the same name across the home user stack as well as the SMB stack (with the Enterprise SKUs to follow) provides a more homogeneous offering.

Just as with the name changes to the personal plans, there’s nothing for IT administrators to do at this stage: the plans will seamlessly change names, but all functionality remains the same (including the lack of long-term backup, something that Altaro has a remedy for).

Go to Original Article
Author: Paul Schnackenburg

RTO and RPO: Understanding Disaster Recovery Times

You will focus a great deal of your disaster recovery planning (and rightly so) on the data that you need to capture. The best way to find out if your current strategy does this properly is to try our acid test. However, backup coverage only accounts for part of a proper overall plan. Your larger design must include a thorough model of recovery goals, specifically Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Ideally, a restore process would contain absolutely everything. Practically, expect that to never happen. This article explains the risks and options of when and how quickly operations can and should resume following systems failure.

Table of Contents

  • Disaster Recovery Time in a Nutshell
  • What is Recovery Time Objective?
  • What is Recovery Point Objective?
  • Challenges Against Short RTOs and RPOs
  • RTO Challenges
  • RPO Challenges
  • Outlining Organizational Desires
  • Considering the Availability and Impact of Solutions
  • Instant Data Replication
  • Short Interval Data Replication
  • Ransomware Considerations for Replication
  • Short Interval Backup
  • Long Interval Backup
  • Ransomware Considerations for Backup
  • Using Multiple RTOs and RPOs
  • Leveraging Rotation and Retention Policies
  • Minimizing Rotation Risks
  • Coalescing into a Disaster Recovery Plan

Disaster Recovery Time in a Nutshell

If a catastrophe strikes that requires recovery from backup media, most people will first ask: “How long until we can get up and running?” That’s an important question, but not the only time-oriented problem that you face. Additionally, and perhaps more importantly, you must ask: “How much already-completed operational time can we afford to lose?” The business-continuity industry represents the answers to those questions with the acronyms RTO and RPO, respectively.

What is Recovery Time Objective?

Your Recovery Time Objective (RTO) sets the expectation for the answer to, “How long until we can get going again?” Just break the words out into a longer sentence: “It is the objective for the amount of time between the data loss event and recovery.”

Recovery Time Objective RTO

Of course, we would like to make all of our recovery times instant. But, we also know that will not happen. So, you need to decide in advance how much downtime you can tolerate, and strategize accordingly. Do not wait until the midst of a calamity to declare, “We need to get online NOW!” By that point, it will be too late. Your organization needs to build up those objectives in advance. Budgets and capabilities will define the boundaries of your plan. Before we investigate that further, let’s consider the other time-based recovery metric.

What is Recovery Point Objective?

We don’t just want to minimize the amount of time that we lose; we also want to minimize the amount of data that we lose. Often, we frame that in terms of retention policies — how far back in time we need to be able to access data. However, failures usually cause a loss of systems during run time. Unless all of your systems continually duplicate data as it enters the system, you will lose something. Because backups generally operate on a timer of some sort, you can often describe that potential loss in a time unit, just as you can with recovery times. We refer to the maximum acceptable amount of lost operational time as the Recovery Point Objective (RPO).

Recovery Point Objective RPO

As with RTOs, shorter RPOs are better. The shorter the amount of time since a recovery point, the less overall data lost. Unfortunately, reduced RPOs take a heavier toll on resources. You will need to balance what you can achieve against what your business units want. Allow plenty of time for discussions on this subject.

Challenges Against Short RTOs and RPOs

First, you need to understand what will prevent you from achieving instant RTOs and RPOs. More importantly, you need to ensure that the critical stakeholders in your organization understand it. These objectives mean setting reasonable expectations for your managers and users at least as much as they mean setting goals for your IT staff.

RTO Challenges

We can define a handful of generic obstacles to quick recovery times:

  • Time to acquire, configure, and deploy replacement hardware
  • Effort and time to move into new buildings
  • Need to retrieve or connect to backup media and sources
  • Personnel effort
  • Vendor engagement

You may also face some barriers specific to your organization, such as:

  • Prerequisite procedures
  • Involvement of key personnel
  • Regulatory reporting

Make sure to clearly document all known conditions that add time to recovery efforts. They can help you to establish a recovery checklist. When someone requests a progress report during an outage, you can indicate the current point in the documentation. That will save you time and reduce frustration.
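Once each checklist step has a time estimate attached, you can sanity-check a proposed RTO with simple arithmetic. A minimal Python sketch, using entirely hypothetical step names and durations:

```python
# Hypothetical recovery checklist: step name -> estimated hours.
RECOVERY_STEPS = {
    "acquire and configure replacement hardware": 8.0,
    "retrieve and connect backup media": 2.0,
    "restore data from backup": 6.0,
    "regulatory reporting": 1.0,
}

def projected_recovery_hours(steps):
    """Worst-case recovery time if the steps run one after another."""
    return sum(steps.values())

def meets_rto(steps, rto_hours):
    """True when the summed step estimates fit inside the objective."""
    return projected_recovery_hours(steps) <= rto_hours
```

If the sum of honest step estimates already exceeds the RTO your stakeholders want, you know before any disaster that the plan, the spend, or the objective has to change.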

RPO Challenges

We could create a similar list of RPO challenges like the one for RTO challenges. Instead, we will summarize them all in one sentence: “The backup frequency establishes the minimum RPO.” To take more frequent backups, you need a fast backup system with adequate storage. So, your ability to bring resources to bear on the problem directly impacts RPO length. You have a variety of solutions to choose from that can help.
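To make that sentence concrete, here is a small Python sketch of the arithmetic (the numbers are hypothetical): with a fixed schedule, a failure just before the next backup loses one full interval, so the interval itself is the floor on the RPO you can promise.

```python
def worst_case_rpo_minutes(backup_interval_minutes):
    """The minimum RPO you can promise equals the backup interval:
    a failure just before the next backup loses one full interval."""
    return backup_interval_minutes

def data_loss_minutes(failure_time, last_backup_time):
    """Actual loss for a specific incident (times in minutes since midnight)."""
    return failure_time - last_backup_time

# Nightly backups (24 hours apart) can never support a 1-hour RPO:
assert worst_case_rpo_minutes(24 * 60) > 60
```

The actual loss in any one incident may be smaller, but the objective must assume the worst case.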

Outlining Organizational Desires

Before expending much effort figuring out what you can do, find out what you must do. Unless you happen to run everything, you will need input from others. Start broadly with the same type of questions that we asked above: “How long can you tolerate downtime during recovery?” and “How far back from a catastrophic event can you re-enter data?” Explain RTOs and RPOs. Ensure that everyone understands that RPO means a loss of recent data, not long-term historical data.

These discussions may require a fair bit of time and multiple meetings. Suggest that managers work with their staff on what-if scenarios. They can even simulate operations without access to systems. For your part, you might need to discover the costs associated with solutions that can meet different RPO and RTO levels. You do not need to provide exact figures, but you should be ready and able to answer ballpark questions. You should also know the options available at different spend levels.

Considering the Availability and Impact of Solutions

To some degree, the amount that you spend controls the length of your RTOs and RPOs. That has limits; not all vendors provide the same value per dollar spent. But, some institutions set out to spend as close to nothing as possible on backup. While most backup software vendors do offer a free level of their product, none of them makes their best features available at no charge. Organizations that try to spend nothing on their backup software will have high RTOs and RPOs and may encounter unexpected barriers. Even if you find a free solution that does what you need, no one makes storage space and equipment available for free. You need to find a balance between cost and capability that your company can accept.

To help you understand your choices, we will consider different tiers of data protection.

Instant Data Replication

For the lowest RPO, only real-time replication will suffice. In real-time replication, every write to live storage is also written to backup storage. You can achieve this many ways, but the most reliable involve dedicated hardware. You will spend a lot, but you can reduce your RPO to effectively zero. Even a real-time replication system can drop active transactions, so never expect a complete shield against data loss.

Real-time replication systems have a very high associated cost. For the most reliable protection, they will need to span geography as well. If you just replicate to another room down the hall and a fire destroys the entire building, your replication system will not save you. So, you will need multiple locations, very high speed interconnects, and capable storage systems.

Short Interval Data Replication

If you can sustain a few minutes of lost information, then you usually find much lower price tags for short-interval replication technology. Unlike real-time replication, software can handle the load of delayed replication, so you will find more solutions. As an example, Altaro VM Backup offers Continuous Data Protection (CDP), which cuts your RPO to as low as five minutes.

As with instant replication, you want your short-interval replication to span geographic locations if possible. But, you might not need to spend as much on networking, as the delays in transmission give transfers more time to complete.

Ransomware Considerations for Replication

You always need to worry about data corruption in replication, and ransomware adds a new twist on the same basic problem. Something damages your real-time data. None the wiser, your replication system makes a faithful copy of that corrupted data. The corruption or ransomware has turned both your live data and your replicated data into useless jumbles of bits.

Anti-malware and safe computing practices present your strongest front-line protection against ransomware. However, you cannot rely on them alone. The upshot: you cannot rely on replication systems alone for backup. A secondary implication: even though replication provides very short RPOs, you cannot guarantee them.

Short Interval Backup

You can use most traditional backup software in short intervals. Sometimes those intervals can be just as short, or nearly as short, as replication intervals. The real difference between replication and backup is the number of possible copies of duplicated data. Replication usually provides only one copy of live data — perhaps two or three at the most — and no historical copies. Backup programs differ in how many unique simultaneous copies they will make, but all will make multiple historical copies. Even better, historical copies can usually exist offline.

You do not need to set a goal of only a few minutes for short interval backups. To balance protection and costs, you might space them out in terms of hours. You can also leverage delta, incremental, and differential backups to reduce total space usage. Sometimes, your technologies have built-in solutions that can help. As an example, SQL administrators commonly use transaction log backups on a short rotation to make short backups to a local disk. They perform a full backup each night that their regular backup system captures. If a failure occurs during the day that does not wipe out storage, they can restore the previous night’s full backup and replay the available transaction log backups.
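The transaction-log approach above amounts to a simple chain-selection rule at restore time. A minimal Python sketch, assuming backups are tagged with comparable timestamps (here, hours):

```python
def restore_chain(full_backups, log_backups, failure_time):
    """Return the backups to restore: the most recent full backup taken
    before the failure, then every transaction log backup taken after
    that full backup and before the failure, in order.
    Times are simple comparable values (e.g. hours since Monday 00:00)."""
    fulls = [t for t in full_backups if t <= failure_time]
    if not fulls:
        return None  # nothing usable to restore from
    base = max(fulls)
    logs = sorted(t for t in log_backups if base < t <= failure_time)
    return {"full": base, "logs": logs}
```

With nightly fulls at hours 0 and 24 and hourly log backups, a failure at hour 27.5 restores the hour-24 full and replays the logs from hours 25 through 27, losing at most the last partial hour.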

Long Interval Backup

At the “lowest” tier, we find the oldest solution: the reliable nightly backup. This usually costs the least in terms of software licenses and hardware. Perhaps counter-intuitively, it also provides the most resilient solution. With longer intervals, you also get longer-term storage choices. You get three major benefits from these backups: historical data preservation, protection against data corruption, and offline storage. We will explore each in the upcoming sections.

Ransomware Considerations for Backup

Because we use a backup to create distinct copies, it has some built-in protection against data corruption, including ransomware. As long as the ransomware has no access to a backup copy, it cannot corrupt that copy. First and foremost, that means that you need to maintain offline backups. Replication requires essentially constant connectivity to its replicas, so only backup can work under this restriction. Second, it means that you need to exercise caution when you execute restore procedures. Some ransomware authors have made their malware aware of several common backup applications, and it will hijack them to corrupt backups whenever possible. You can only protect your offline data copies by attaching them to known-safe systems.

Using Multiple RTOs and RPOs

You will need to structure your systems into multiple RTO and RPO categories. Some outages will not require much time to recover from; some will require different solutions. Even though we tend to think primarily in terms of data during disaster recovery planning, you must consider equipment as well. For instance, if your sales division prints its own monthly flyers and you lose a printer, then you need to establish RTOs, RPOs, downtime procedures, and recovery processes just for those print devices.

You also need to establish multiple levels for your data, especially when you have multiple protection systems. For example, if you have both replication and backup technologies in operation, then you will set one RTO/RPO pair for times when replication works, and another for when you must resort to long-term backup. That could happen due to ransomware or some other data corruption event, but it can also happen if someone accidentally deletes something important.

To start this planning, establish “Best Case” and “Worst Case” plans and processes for your individual systems.

Leveraging Rotation and Retention Policies

For your final exercise in time-based disaster recovery designs, we will look at rotation and retention policies. “Rotation” comes from the days of tape backups, when we would decide how often to overwrite old copies of data. Now that high-capacity external disks have reached a low-cost point, many businesses have moved away from tape. You may not overwrite media anymore, or at least not at the same frequency. Retention policies dictate how long you must retain at least one copy of a given piece of information. These two policies directly relate to each other.

Backup Rotation and Retention

In today’s terms, think of “rotation” more in terms of unique copies of data. Backup systems have used “differential” and “incremental” backups for a very long time. The former is a complete record of changes since the last full backup; the latter is a record of changes since the last backup of any kind. Newer backup systems add “delta” and deduplication capabilities. A “delta” backup operates like a differential or incremental backup, but within files or blocks. Deduplication keeps only one copy of a block of bits, regardless of how many times it appears within an entire backup set. These technologies reduce backup time and storage space needs… at a cost.
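The difference between differential and incremental backups matters most at restore time. A minimal Python sketch (hypothetical timestamps) showing which pieces each approach needs back:

```python
def incremental_restore_chain(full_time, incremental_times, failure_time):
    """Incrementals record changes since the last backup of any kind,
    so a restore needs the full backup plus EVERY incremental
    taken before the failure."""
    return [full_time] + [t for t in sorted(incremental_times) if t <= failure_time]

def differential_restore_chain(full_time, differential_times, failure_time):
    """Differentials record all changes since the last full backup,
    so a restore needs only the full backup plus the LATEST differential."""
    candidates = [t for t in differential_times if t <= failure_time]
    return [full_time] + ([max(candidates)] if candidates else [])
```

The incremental chain is smaller on disk but longer and more fragile: losing any link breaks everything after it, which is exactly the risk the next section discusses.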

Minimizing Rotation Risks

All of these speed-enhancing and space-reducing improvements have one major cost: they reduce the total number of available unique backup copies. As long as nothing goes wrong with your media, this will never cause you a problem. However, if one of the full backups suffers damage, that invalidates all dependent partial backups. You must balance the number of full backups that you take against the amount of time and bandwidth necessary to capture them.

As one minimizing strategy, target your full backup operations to occur during your organization’s quietest periods. If you do not operate 24 hours per day, that might allow for nightly full backups. If you have low volume weekends, you might take full backups on Saturdays or Sundays. You can intersperse full backups on holidays.

Coalescing into a Disaster Recovery Plan

As you design your disaster recovery plan, review the sections in this article as necessary. Remember that all operations require time, equipment, and personnel. Faster backup and restore operations always require a trade-off of expense and/or resilience. Modest lengthening of allowable RTOs and RPOs can result in major cost and effort savings. Make certain that the key members of your organization understand how all of these numbers will impact them and their operations during an outage.

If you need some help defining RTO and RPO in your organization, let me know in the comments section below and I will help you out!

Go to Original Article
Author: Eric Siron

The Acid Test for Your Backup Strategy

For the first several years that I supported server environments, I spent most of my time working with backup systems. I noticed that almost everyone did their due diligence in performing backups. Most people took adequate responsibility for verifying that their scheduled backups ran without error. However, almost no one ever checked that they could actually restore from a backup — until disaster struck. I gathered a lot of sorrowful stories during those years. I want to use those experiences to help you avert a similar tragedy.

Successful Backups Do Not Guarantee Successful Restores

Fortunately, a lot of the problems that I dealt with in those days have almost disappeared due to technological advancements. But, that only means that you have better odds of a successful restore, not that you have a zero chance of failure. Restore failures typically mean that something unexpected happened to your backup media. Things that I’ve encountered:

  • Staff inadvertently overwrote a full backup copy with an incremental or differential backup
  • No one retained the necessary decryption information
  • Media was lost or damaged
  • Media degraded to uselessness
  • Staff did not know how to perform a restore — sometimes with disastrous outcomes

I’m sure that some of you have your own horror stories.

These risks apply to all organizations. Sometimes we manage to convince ourselves that we have immunity to some or all of them, but you can’t get there without extra effort. Let’s break down some of these line items.

People Represent the Weakest Link

We would all like to believe that our staff will never make errors and that the people who need to operate the backup system have the ability to do so. However, as part of your disaster recovery planning, you must assume that you cannot predict the state or availability of any individual. If only a few people know how to use your backup application, then those people become part of your risk profile.

You have a few simple ways to address these concerns:

  • Periodically test the restore process
  • Document the restore process and keep the documentation updated
  • Give non-IT personnel knowledge of, and practice with, backup and restore operations
  • Ensure non-IT personnel know how to get help with the application

It’s reasonable to expect that you would call your backup vendor for help in the event of an emergency that prevented your best people from performing restores. However, in many organizations without a proper disaster recovery plan, no one outside of IT even knows who to call. The knowledge inside any company naturally tends to arrange itself in silos, but you must make sure to spread at least the bare minimum information.

Technology Does Fail

I remember many shock and horror reactions when a company owner learned that we could not read the data from their backup tapes. A few times, these turned into grief and loss counselling sessions as they realized that they were facing a critical — or even complete — data loss situation. Tape has its own particular risk profile, and lots of businesses have stopped using it in favour of on-premises disk-based storage or cloud-based solutions. However, all backup storage technologies present some kind of risk.

In my experience, data degradation occurred most frequently. You might see this called other things, my favourite being “bit rot”. Whatever you call it, it all means the same thing: the data currently on the media is not the same data that you recorded. That can happen just because magnetic storage devices have susceptibilities. That means that no one made any mistakes — the media just didn’t last. For all media types, we can establish an average for failure rates. But, we have absolutely no guarantees on the shelf life for any individual unit. I have seen data pull cleanly off decade-old media; I have seen week-old backups fail miserably.

Unexpectedly, newer technology can make things worse. In our race to cut costs, we frequently employ newer ways to save space and time. In the past, we had only compression and incremental/differential solutions. Now, we have tools that can deduplicate across several backup sets and at multiple levels. We often put a lot of reliance on the single copy of a bit.

How to Test your Backup Strategy

The best way to find weaknesses is to break-test your backup strategy. Performing test restores will help you identify backup reliability problems and solve them. Simply put, you cannot know that you have a good backup unless you can perform a good restore. You cannot know that your staff can perform a restore unless they perform a restore. For maximum effect, you need to plan tests to occur on a regular basis.

Some tools, like Altaro VM Backup, have built-in features to make tests easy. Altaro VM Backup provides a “Test & Verify Backups” wizard to help you perform on-demand tests and a “Schedule Test Drills” feature to help you automate the process.


If your tool does not have such a feature, you can still use it to make certain that your data will be there when you need it. It should have some way to restore a separate or redirected copy. So, instead of overwriting your live data, you can create a duplicate in another place where you can safely examine and verify it.

Test Restore Scenario

In the past, we would often simply restore some data files to a shared location and use a simple comparison tool. Now that we use virtual machines for so much, we can do a great deal more. I’ll show one example of a test that I use. In my system, all of these are Hyper-V VMs. You’ll have to adjust accordingly for other technologies.

Using your tool, restore copies of:

  • A domain controller
  • A SQL server
  • A front-end server dependent on the SQL server

On the host that you restored those VMs to, create a private virtual switch. Connect each virtual machine to it. Spin up the copied domain controller, then the copied SQL server, then the copied front-end. Use the VM connect console to verify that all of them work as expected.
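On Hyper-V, the isolated-switch portion of this drill can be scripted with the built-in Hyper-V cmdlets. A minimal sketch follows; the switch and VM names are hypothetical and should match your restored copies:

```powershell
# Create an isolated (private) switch so restored copies cannot
# touch the production network
New-VMSwitch -Name "RestoreTest" -SwitchType Private

# Attach each restored VM copy to the private switch
"DC-Copy", "SQL-Copy", "Web-Copy" | ForEach-Object {
    Connect-VMNetworkAdapter -VMName $_ -SwitchName "RestoreTest"
}

# Boot in dependency order: domain controller, then SQL, then front-end
Start-VM -Name "DC-Copy"
Start-VM -Name "SQL-Copy"
Start-VM -Name "Web-Copy"
```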

Create test restore scenarios of your own! Make sure that they match a real-world scenario that your organization would rely on after a disaster.

Go to Original Article
Author: Eric Siron

Remote Work: Top 5 Challenges (and Solutions) for IT Admins

The devastating effects of the COVID-19 pandemic are plain for all to see. Our sympathies go out to everyone struggling to cope with this tragic situation. We are truly indebted to the health workers around the world who are working tirelessly to protect us. But with most countries now restricting movements and public gatherings, perhaps the unsung heroes of the current situation from an economic perspective are the IT admins who are keeping the world’s workforce productive.

Virtually every industry has been impacted by the global pandemic in some way, but the businesses that are fortunate enough to be able to continue operations remotely have to adapt – and quickly. Larger enterprises have been doing this for years and will already have a robust remote work infrastructure in place. But for those organizations which are transitioning to this new work dynamic, it introduces many new challenges which IT departments are expected to solve. This article will review the top 5 challenges facing IT admins managing remote operations, along with solutions from Microsoft and other service providers.

1. Drops in Productivity and Business Continuity 

One of the first things that your company will see is a drop in productivity as business operations slow down. This will likely be a combination of remote work challenges due to technology and personal reasons. When people are at home, they are generally more distracted, and they may even be taking care of their children during business hours.

Companies should expect these challenges and realize that this could also change work habits. For example, productivity could spike in the early mornings and evenings when children are usually sleeping. Encourage managers to ask their staff about any changes in employee behaviour so that the IT department can be prepared. Monitoring Office 365 activity is possible via usage reports:

Monitoring user activity in Microsoft 365

Office 365 Usage Executive Summary

All your staff should have laptops with your company’s line of business applications and productivity suite, which could include Microsoft Office 365 or Google G Suite. While most of your employees will be familiar with products like Word and Excel, explore whether using SharePoint for content collaboration or OneNote for shared workspaces will help different business units.

It probably goes without saying, but make sure that you are using a reliable virtual communication or meeting tool, such as Microsoft Teams or Google Hangouts. Encourage managers to move all their regular in-person meetings to the virtual format to keep their team on track.

If you haven’t already, provide a password-protected webpage which employees can visit containing a current version of all remote work guidelines, documentation, service notifications and a place to submit help requests. This is particularly important during a transition period with a lot of new remote users. Make sure guidelines are properly followed as this is one of the most important stages to be fully aware of vulnerability issues and to minimize risk.

Microsoft System Center Service Manager (SCSM) can provide this as an ITSM solution with an online help portal. Most IT organizations will see an increase in support requests, so consider redirecting more of your staff towards customer support. IT will be critical in keeping the business running, so quick customer response times are essential.

It is important to set clear expectations with your business stakeholders that there may be disruptions to IT services during the transition phase and encourage teams to have backup communication plans. This could be as simple as sharing each other’s phone numbers to ensure business continuity in case of an unplanned outage. You want to make the transition as easy as possible so that you can be a hero of your company.

2. Managing Network and Connectivity Issues

Since the coronavirus outbreak started, cities throughout the world have seen a 10 to 40% increase in Internet traffic. This is due to more people working from home and an increased number of children and others in isolation viewing more internet content during business hours from the same building. This has caused many disruptions throughout the internet; however, Internet Service Providers (ISPs) have been scaling up their infrastructure to support this growth.

While you cannot change these internet-wide problems we are all facing, you can provide your workforce with some best practices to help them address connectivity issues. Remind them that any network usage within their building, including streaming TV, video games or music, will reduce their bandwidth.

Some ISPs and cellular providers are offering their residential and commercial customers discounted service upgrades and no data overage charges – check with your company internet provider if they offer this as soon as possible. Furthermore, compile a list of these organizations for your employees. If regular Internet access becomes too slow, remind them that they may be able to set up a mobile hotspot and tether their laptop to their cell phone’s Internet connection.

Once your staff has remotely connected to your infrastructure there are different ways that you can optimize the network traffic. First, scale-out your network hardware. If you are running your services in a public or private cloud, you can take advantage of network virtualization and network function virtualization (NFV) by deploying virtualized routers, switches and load balancers.

If your datacenter is using physical networking hardware, you may want to invest in additional equipment, but be wary of delays if you have to ship anything internationally. If you are expecting an increase in remote users, also consider that you may need to increase the capacity of your entire infrastructure, including virtual machines and storage network throughput.

You should also prioritize network traffic so that requests for business-critical services or VoIP communications are more likely to go through. Network prioritization, or Quality of Service (QoS), can be controlled from various points in your physical and virtual networks. If you are using Hyper-V, you can use storage QoS to prioritize disk access to important data. You can also learn additional virtual networking best practices from Altaro.
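As an illustration, Hyper-V storage QoS is set per virtual disk with Set-VMHardDiskDrive. The VM name, controller location, and IOPS values below are hypothetical; tune them to your workload:

```powershell
# Reserve a minimum and cap the maximum normalized IOPS
# for a business-critical VM's data disk
Set-VMHardDiskDrive -VMName "CRM-DB" `
    -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 500 -MaximumIOPS 2000
```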

Make sure you have a backup plan in place for different types of outages. Some organizations have deployed a redundant copy of their services in a secondary site or public cloud so you may want to consider turning these resources on to support the extra demand. However, cloud providers are not immune to outages (more about that in Protecting User Data) so you should always have a backup plan in place especially for communications services, a PABX (e.g. 3CX) for Chat & Video outages, and Email continuity solutions (like those of Proofpoint & Mimecast) for email outages. 

If you do experience a service drop, it may not be immediately reported by the service provider, as they often prefer to have an official reason and solution in place before they communicate it. Downdetector is a great way to quickly check whether it is a genuine service drop or just your connection.

monitor app downtime with downdetector

3. Enabling Access to Remote Resources 

Once your users have network access, you want to restrict or grant access to specific resources. Windows Server Active Directory for role-based access control (RBAC) is just the start – you should create groups in Active Directory and assign different users to each. Configure and manage security policies at the group level to simplify administration when users join or leave the organization. This is also where you can implement VPN restrictions (read more in Ensuring High Security Standards below).
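As a sketch, creating such a group and populating it uses the ActiveDirectory PowerShell module; the group and user names here are hypothetical:

```powershell
Import-Module ActiveDirectory

# Create a security group for remote workers
New-ADGroup -Name "Remote-VPN-Users" -GroupScope Global -GroupCategory Security

# Add a user; access policies are then assigned to the group,
# not to individual accounts
Add-ADGroupMember -Identity "Remote-VPN-Users" -Members "jsmith"
```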

Some of your users may need to access the operating system of a server, which can be provided through Microsoft Remote Desktop (RDP). This application allows the user to connect directly to a workstation or virtual machine to get access to all the services running on it. Make sure that you provide guidance, ideally with graphical step-by-step instructions, for any new processes. Ensure that all employees have a clear escalation path to tech support.

4. Ensuring High Security Standards

In addition to using Active Directory with RBAC, you should take time to deploy security best practices for remote access. This starts with education, making sure that employees are using private Internet connections or connecting via a VPN if they access your services via a public network. Also, ensure that you are following any industry-specific security requirements if you are working with sensitive data or in a regulated industry.

You should already have deployed a strong firewall which you can manage remotely. If the firewall is virtualized as a network function virtualization (NFV) device, then it can be dynamically updated and scaled. By default, disable all inbound and outbound traffic with a “zero trust” security model, and only allow access to specific protocols and ports that you are intentionally using. It is essential to turn Multi-Factor Authentication on for Office 365 access to ensure that remote users go through extra validation via clicking on an email link or entering a code sent to their mobile device.  
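The default-deny posture described above can be sketched on a Windows-based firewall with the built-in NetSecurity cmdlets; the profiles, rule name, and port below are illustrative:

```powershell
# Block all inbound traffic by default on every profile
Set-NetFirewallProfile -Profile Domain,Private,Public -DefaultInboundAction Block

# Then explicitly allow only the protocols and ports you intend to use,
# e.g. HTTPS to an internal web service
New-NetFirewallRule -DisplayName "Allow HTTPS" -Direction Inbound `
    -Protocol TCP -LocalPort 443 -Action Allow
```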

For advanced control, use Azure Conditional Access which lets IT departments dynamically grant and revoke access based on different variables. A good example of this is using conditional policies to restrict access from certain countries if you know you shouldn’t have any users logging on from those regions.

Take advantage of any native security tools offered by your cloud provider, such as Azure Security Center. Portals like this provide an IT department with best practices and reports and use AI-based analytics to find anomalies or unusual traffic patterns.

For more information about Azure Security Center, watch our on-demand webinar Azure Security Center: How to Protect Your Datacenter with Next Generation Security

Azure Security Center webinar

5. Protecting User Data

Now that all of your employees are working remotely from laptops, you want to ensure that their data is backed up and can quickly be recovered. The easiest option is to have your employees place their files on cloud storage, such as Microsoft OneDrive for Business, Microsoft SharePoint or Dropbox. This helps by providing a centralized location so that even if the user loses their laptop, a copy of the data is preserved.

However, these copies are effectively redundant copies which are not a replacement for backup. These redundant copies enable shared documents to function with approved users able to access the document for collaboration. But this data is constantly rewritten and thus not a genuine alternative to stored backup copies.

If your company is using Microsoft O365 for email, then one copy of that user’s mailbox is available in the cloud. Keep in mind that while the email is always accessible, only one copy of all O365 data is stored by Microsoft. However again, this is effectively redundant data. Thus, the native backup provided by Microsoft will not be enough for most companies considering that with more people working from home, the chances of software outages and blackouts increase due to the strain from supporting the wave of new users.  

This was all too real to users who experienced wide-spread blackouts of Teams and Exchange Online on Tuesday 17 March. 

Microsoft Teams service health

Office 365 Service Health on Tuesday 17 March 2020  

One way to prepare for these events is to have alternative options for the key apps to continue operations and business continuity (discussed above). However, these events also prove that Microsoft is not infallible, and proper backup is essential to ensure you don’t lose the vital data your company relies upon. Therefore, you should also be using a solution such as Altaro Office 365 Backup to create a reliable and secure backup of every O365 mailbox, SharePoint document and OneDrive for Business file.


Start your free trial of Altaro Office 365 Backup

If your users are not automatically storing their files in the cloud, then at least make sure that their files on their laptops are being regularly and automatically backed up. You can configure global backup settings across your organization using Group Policy Management. This will force backups to be taken on every laptop at certain intervals for certain essential business applications. Consider having the backups automatically copied to a remote file storage location during off-hours.

You now know how to prepare for the top five challenges you will face as your employees start working from home. If this sounds too complicated for your IT team’s skills, consider finding a managed service provider (MSP) or Cloud Service Provider (CSP) to help you through the transition.

Keep in mind that these best practices only cover the needs of your employees. If your business offers technology services, then consider the impact of the home workforce on your line of business applications, and scale them up or down as appropriate. Using these tips, you’ll get ahead of the challenges that your organization will face as more of the staff works remotely. These are testing times for all of us, but as an IT admin, you can now become one of your company’s heroes by keeping your business running in this new and challenging world.

Good luck! 

Go to Original Article
Author: Symon Perriman

How to Run a Windows Failover Cluster Validation Test

Guest clustering describes an increasingly popular deployment configuration for Windows Server Failover Clusters where the entire infrastructure is virtualized. With a traditional cluster, the hosts are physical servers and run virtual machines (VMs) as their highly available workloads. With a guest cluster, the hosts are also VMs which form a virtual cluster, and they run additional virtual machines nested within them as their highly available workloads. Microsoft now recommends dedicating clusters for each class of enterprise workload, such as your Exchange Server, SQL Server, File Server, etc., because each application has different cluster settings and configuration requirements. Setting up additional clusters became expensive for organizations when they had to purchase and maintain more physical hardware. Other businesses wanted guest clustering as a cheaper test, demo or training infrastructure. To address this challenge, Microsoft Hyper-V supports “nested virtualization” which allows you to create virtualized hosts and run VMs from them, creating fully-virtualized clusters. While this solves the hardware problem, it has created new obstacles for backup providers as each type of guest cluster has special considerations.

Hyper-V Guest Cluster Configuration and Storage

Let’s first review the basic configuration and storage requirements for a guest cluster. Fundamentally a guest cluster has the same requirements as a physical cluster, including two or more hosts (nodes), a highly available workload or VM, redundant networks, and shared storage. The entire solution must also pass the built-in cluster validation tests. You should also force every virtualized cluster node to run on different physical hosts, so that losing a single physical server will not bring down your entire guest cluster; this can be easily configured using Failover Clustering’s AntiAffinityClassNames or Azure Availability Sets. Some of the guest cluster requirements will also vary with the nested virtualized application which you are running, so always check for workload-specific requirements during your planning.
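AntiAffinityClassNames is assigned through the cluster group objects of the node VMs. A minimal sketch, with hypothetical group and class names, looks like this:

```powershell
# Tag each guest-cluster node VM with the same anti-affinity class
# so the cluster tries to keep them on different physical hosts
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestClusterSQL")

(Get-ClusterGroup -Name "SQLNode1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "SQLNode2").AntiAffinityClassNames = $class
```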

Shared storage used to be a requirement for all clusters because it allows the workload or VM to access the same data regardless of which node is running that workload. When the workload fails over to a different node, its services get restarted, then it accesses the same shared data which it was previously using. Windows Server 2012 R2 and later supports guest clusters with shared storage using a shared VHDX disk, iSCSI or virtual fibre channel. Microsoft added support for local DAS replication using storage spaces direct (S2D) within Windows Server 2016 and continued to improve S2D with the latest 2019 release.

For a guest cluster deployment guide, you can refer to the documentation provided by Microsoft to create a guest cluster using Hyper-V. If you want to do this in Microsoft Azure, then you can also follow enabling nested virtualization within Microsoft Azure.

Backup and Restore the Entire Hyper-V Guest Cluster

The easiest backup solution for guest clustering is to save the entire environment by protecting all the VMs in that set. This has almost-universal support by third party backup vendors such as Altaro, as it is essentially just protecting traditional virtual machines which have a relationship to each other. If you are using another VM as part of the set as an isolated domain controller, iSCSI target or file share witness, make sure it is backed up too.

A (guest) cluster-wide backup is also the easiest solution for scenarios where you wish to clone or redeploy an entire cluster for test, demo or training purposes by restoring it from a backup. If you are restoring a domain controller, make sure you bring this back online first. If you are deploying copies of a VM, especially if one contains a domain controller, make sure that any images have been Sysprepped to avoid conflicts by giving them new global identifiers. Also, use DHCP to get new IP addresses for all network interfaces. In this scenario, it is usually much easier to just deploy this cloned infrastructure in a fully isolated environment so that the cloned domain controllers do not cause conflicts.

The downside to cluster-wide backup and restore is that you will lack the granularity to protect and recover a single workload (or item) running within the VM, which is why most admins will select another backup solution for guest clusters. Before you pick one of the alternative options, make sure that both your storage and backup vendor support this guest clustering configuration.

Backup and Restore a Guest Cluster using iSCSI or Virtual Fibre Channel

When guest clusters first became supported for Hyper-V, the most popular storage configurations were to use an iSCSI target or virtual fibre channel. iSCSI was popular because it was entirely Ethernet-based, which means that inexpensive commodity hardware could be used and Microsoft offered a free iSCSI Target server. Virtual fibre channel was also prevalent since it was the first type of SAN-based storage supported by Hyper-V guest clusters through its virtualized HBAs. Either solution works fine and most backup vendors support Hyper-V VMs running on these shared storage arrays. This is a perfectly acceptable solution for reliable backups and recovery if you are deploying a stable guest cluster. The main challenge was that in its earlier versions, Cluster Shared Volumes (CSV) disks and live migration had limited support from vendors. This meant that basic backups would work, but there were a lot of scenarios that would cause backups to fail, such as when a VM was live migrating between hosts. Most scenarios are now supported in production, but still make sure that your storage and backup vendors support and recommend it.

Backup and Restore a Guest Cluster using a Shared Virtual Hard Disk (VHDX) & VHD Set

Windows Server 2012 R2 introduced a new type of shared storage disk which was optimized for guest clustering scenarios, known as the shared virtual hard disk (.vhdx file), or Shared VHDX. This allowed multiple VMs to synchronously access a single data file which represented a shared disk (similar to a drive shared by an iSCSI Target). This disk could be used as a file share witness disk, or more commonly to store shared application data used by the workload running on the guest cluster. This Shared VHDX file could either be stored on a CSV disk or SMB file share (using a Scale-Out File Server).

This first release of a shared virtual hard disk had some limitations and was generally not recommended for production. The main criticisms were that backups were not reliable, and backup vendors were still catching up to support this new format. Windows Server 2016 addressed these issues by adding support for online resizing, Hyper-V Replica, and application-consistent checkpoints. These enhancements were released as a newer Hyper-V VHD Set (.vhds) file format. The VHD Set included additional file metadata which allowed each node to have a consistent view of that shared drive’s metadata, such as the block size and structure. Prior to this, nodes might have an inconsistent view of the Shared VHDX file structure which could cause backups to fail.

While VHD Sets was optimized to support guest clusters, there were inevitably some issues discovered which are documented by Microsoft Support. An important thing when using Shared VHDX / VHD Sets for your guest cluster is to ensure that all of your storage, virtualization, and clustering components are patched with any related hotfixes specific to your environment, including any from your storage and backup provider. Also, make sure you explicitly check that your ISVs support this updated file format and follow Microsoft’s best practices. Today this is the recommended deployment configuration for most new guest clusters.

Backup and Restore a Guest Cluster using Storage Spaces Direct (S2D)

Microsoft introduced another storage management technology in Windows Server 2016, which was improved in Windows Server 2019, known as Storage Spaces Direct (S2D). S2D was designed as a low-cost solution to support clusters without any requirement for shared storage. Instead, local DAS drives are synchronously replicated between cluster nodes to maintain a consistent state. This is certainly the easiest guest clustering solution to configure; however, Microsoft has announced some limitations in the current release (this link also includes a helpful video showing how to deploy an S2D cluster in Azure).

First, you are restricted to a 2-node or 3-node cluster only, and in either case you can only sustain the loss or outage of a single node. You also want to ensure that the disks have low latency and high performance, ideally using SSD drives or Azure’s Premium Storage managed disks. One of the major limitations still remains around backups, as host-level virtual disk backups are currently not supported. If you deploy the S2D cluster, you are restricted to only taking backups from within the guest OS. Until this has been resolved and your backup vendor supports S2D, the safest option with the most flexibility will be to deploy a guest cluster using Shared VHDX / VHD Sets.


Microsoft is striving to improve guest clustering with each subsequent release. Unfortunately, this makes it challenging for third-party vendors to keep up with their support of the latest technology. It can be especially frustrating to admins when their preferred backup vendor has not yet added support for the latest version of Windows, and you should share this feedback on what you need with your ISVs. It is always a good best practice to select a vendor with close ties to Microsoft, as they get provided with early access to code and always aim to support the latest and greatest technology. The leading backup companies like Altaro are staffed by Microsoft MVPs and regularly consult with former Microsoft engineers such as myself, to support the newest technologies as quickly as possible. But always make sure that you do your homework before you deploy any of these guest clusters so you can pick the best configuration which is supported by your backup and storage provider.

Go to Original Article
Author: Symon Perriman

How to Install Hyper-V PowerShell Module

The best combination of power and simplicity for controlling Hyper-V is its PowerShell module. The module’s installable component is distinct from the Hyper-V role, and the two are not automatically installed together. Even if you have installed the free Hyper-V Server product that ships with the Hyper-V role already enabled, you’ll still need to install the PowerShell module separately. This short guide will explain how to install that module and understand its basic structure. If you need to directly control Windows Server 2012/R2 or Hyper-V Server 2012/R2 using the PowerShell module as it ships in Windows 10/Windows Server 2016 or 2019, instructions are at the very end of this post.

How to Install the Hyper-V PowerShell Module with PowerShell

The quickest way to install the module is through PowerShell. There are several ways to do that, depending on your operating system and your goal.

Using PowerShell to Install the Hyper-V PowerShell Module in Windows 10

All of the PowerShell cmdlets for turning off and on Windows features and roles are overlays for the Deployment Image Servicing and Management (DISM) subsystem. Windows 10 does include a PowerShell module for DISM, but it uses a smaller cmdlet set than what you’ll find on your servers. The server module’s cmdlets are simpler, so I’m going to separate out the more involved cmdlets into the Windows 10 section. The cmdlets that I’m about to show you will work just as well on a server operating system as on Windows 10, although the exact names of the features that you’ll use might be somewhat different. All cmdlets must be run from an elevated PowerShell prompt.

As I mentioned in the preamble to this section, there are a few different ways that you can enable the Hyper-V PowerShell module. There is only a single cmdlet, and you will only need to use it to enable a single feature. However, the module appears in a few different features, so you’ll need to pick the one that is most appropriate for you. You can see all of the available options like this:
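The listing comes from the DISM optional-features cmdlet; a command along these lines reproduces it:

```powershell
# List every optional feature whose name contains "Hyper-V"
Get-WindowsOptionalFeature -Online -FeatureName *Hyper-V* |
    Format-Table FeatureName, State
```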

The reason that you see so many different objects is that it’s showing a flat display of the hierarchical tree that you’d get if you opened the Windows Features window instead. Unfortunately, this cmdlet does not have a nicely formatted display (even if you don’t pare it down with any filters), so it might not be obvious. Compare the output of the cmdlet to the Windows Features screen:

You have three options if you want to install the PowerShell Module on Windows 10. The simplest is to install only the module by using its feature name. Installing either of the two options above it (Hyper-V Management Tools or the entire Hyper-V section) will include the module. I trimmed off the feature name in the images above, so all three possibilities are shown below:
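The three installation choices map to these feature names, with the module-only option first:

```powershell
# Only the PowerShell module
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell

# All of the Hyper-V management tools (includes the module)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Tools-All

# Everything: the Hyper-V platform plus all of the tools
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
```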

Tab completion will work for everything except the specific feature name. But, don’t forget that copy/paste works perfectly well in a PowerShell window (click/drag to highlight, [Enter] to copy, right-click to paste). You can use the output from Get-WindowsOptionalFeature so that you don’t need to type any feature names.

It’s fine to install a higher-level item even if some of its subcomponents are already installed. For example, if you enabled the Hyper-V platform but not the tools, you can enable Microsoft-Hyper-V-All and it will not hurt anything.

The  Enable-WindowsOptionalFeature cmdlet does not have a ComputerName parameter, but it can be used in explicit PowerShell Remoting.
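A sketch of that explicit remoting approach, with a hypothetical computer name:

```powershell
# Run the enable command in a remote session on the target machine
Invoke-Command -ComputerName "desktop-w10" -ScriptBlock {
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell
}
```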

Using PowerShell to Install the Hyper-V PowerShell Module in Windows Server or Hyper-V Server 2012, 2016 & 2019

The DISM PowerShell tools on the server platforms are a bit cleaner to use than in Windows 10. If you’d like, the cmdlets shown in the Windows 10 section will work just as well on the servers (although the feature names are different). The cmdlets shown in this section will only work on server SKUs. They must be run from an elevated prompt.
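The feature listing in its short positional form looks like this:

```powershell
# List the Hyper-V role and its related tool features
Get-WindowsFeature *hyper-v*
```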

Ordinarily, I don’t show cmdlets using positional parameters, but I wanted you to see how easy this cmdlet is to use. The full version of the shown cmdlet is Get-WindowsFeature -Name *hyper-v*. Its output looks much nicer than Get-WindowsOptionalFeature:

There is a difference, though. Under Windows 10, all the items live under the same root. In the Windows SKUs, Hyper-V is under the Roles branch but all of the tools are under the Features branch. The output indentation, when filtered, is misleading.

The Server SKUs have an Install-WindowsFeature cmdlet. Its behavior is similar to Enable-WindowsOptionalFeature, but it is not quite the same. Enabling the root Hyper-V feature will not automatically select all of the tools (as you might have already found out). These are all of the possible ways to install the Hyper-V PowerShell Module using PowerShell on a Server product:
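Those possibilities correspond to these commands, with the module-only option first:

```powershell
# Only the PowerShell module
Install-WindowsFeature -Name Hyper-V-PowerShell

# All of the Hyper-V remote administration tools (includes the module)
Install-WindowsFeature -Name RSAT-Hyper-V-Tools

# The Hyper-V role together with all of its management tools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools
```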

Tab completion will work for everything except the specific feature name. But, don’t forget that copy/paste works perfectly well in a PowerShell window (click/drag to highlight, [Enter] to copy, right-click to paste). You can use the output from Get-WindowsFeature so that you don’t need to type any feature names.

If the Hyper-V role is already enabled, you can still use either of the last two options safely. If the Hyper-V role is not installed and you are using one of those options, the system will need to be restarted. If you like, you can include the -Restart parameter and DISM will automatically reboot the system as soon as the installation is complete.

The Install-WindowsFeature cmdlet does have a ComputerName parameter, so it can be used with implicit PowerShell Remoting to enable the feature on multiple computers simultaneously. For example, use -ComputerName svhv01, svhv02, svhv03, svhv04 to install the feature(s) on all four of the named hosts simultaneously. If you are running your PowerShell session from a Windows 10 machine that doesn’t have that cmdlet, you can still use explicit PowerShell Remoting.
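One way to script that across the four hosts named above is a simple loop:

```powershell
# Install the module on each server in turn
"svhv01", "svhv02", "svhv03", "svhv04" | ForEach-Object {
    Install-WindowsFeature -Name Hyper-V-PowerShell -ComputerName $_
}
```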

How to Install the Hyper-V PowerShell Module Using the GUI

It seems a bit sacrilegious to install a PowerShell module using a GUI, and it certainly takes longer than using PowerShell, but I suppose someone has a reason.

Using the GUI to Install the Hyper-V PowerShell Module on Windows 10

Follow these steps in Windows 10:

  1. Right-click on the Start button and click Programs and Features.
  2. In the Programs and Features dialog, click Turn Windows features on or off.
  3. In the Windows Features dialog, check the box for Hyper-V Module for Windows PowerShell (and anything else that you’d like) and click OK.
  4. The dialog will signal completion and the module will be installed.

Using Server Manager to Install the Hyper-V PowerShell Module on Windows Server or Hyper-V Server 2012, 2016 & 2019

Server Manager is the tool to use for graphically adding roles and features on Windows Server and Hyper-V Server systems. Of course, you’re not going to be able to directly open Server Manager on Hyper-V Server systems, but you can add a system running Hyper-V Server to the console of any same-level system running a GUI edition of Windows Server (security restrictions apply). The RSAT package for Windows 10 includes Server Manager and can remotely control servers (security restrictions apply there, as well). While Server Manager can be remotely connected to multiple systems, it can only install features on one host at a time.

To use Server Manager to enable Hyper-V’s PowerShell module, open the Add Roles and Features wizard and proceed through to the Features page. Navigate to Remote Server Administration Tools -> Role Administration Tools -> Hyper-V Management Tools and check Hyper-V Module for Windows PowerShell. Proceed through the wizard to complete the installation.

The module will be immediately available to use once the wizard completes.

A Brief Explanation of the Hyper-V PowerShell Module

Once installed, you can find the module’s files at C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Hyper-V. Its location will ensure that the module is automatically loaded every time PowerShell starts up. That means that you don’t need to use Import-Module — you can start right away with your scripting.

If you browse through and look at the files a bit, you might notice that the PowerShell module files reference a .DLL. This means that this particular PowerShell module is a binary module. Microsoft wrote it in a .Net language and compiled it. Its cmdlets will run faster than they would in a text-based module, but you won’t be able to see how it does its work (at least, not by using any sanctioned methods).

Connecting to Windows/Hyper-V Server 2012, 2016 & 2019 from PowerShell in Windows 10/Server 2016 & 2019

If you are using Windows 10 and Windows/Hyper-V Server 2016 or 2019, there’s an all-new version 2.0.0 of the Hyper-V PowerShell module. That’s a good thing, right? Well, usually. The thing is, the 2012 and 2012 R2 versions aren’t going away any time soon, and we still need to control those. Version 2 of the PowerShell module will throw an error when you attempt to control these down-level systems. The good news is that you can work around this limitation fairly easily. If you browsed the folder tree on one of these newer OS releases, you may have noticed that there is a 1.1 folder as well as a 2.0.0 folder. The earlier binary module is still included!

So, does that mean that you can happily kick off some scripts on those “old” machines? Let’s see:

The error is: “Get-VM : The Hyper-V module used in this Windows PowerShell session cannot be used for remote management of the server ‘SVHV2’. Load a compatible version of the Hyper-V module, or use Powershell remoting to connect directly to the remote server. For more information, see […]”

What to do?

The answer lies in a new feature of PowerShell 5, which fortunately comes with these newer OSs. We will first get a look at what our options are:
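A quick inventory of the installed module versions:

```powershell
# Show every version of the Hyper-V module available on this system
Get-Module -Name Hyper-V -ListAvailable
```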

You could run this without ListAvailable to determine which, if any, version is already loaded. You already know that PowerShell auto-loads the module and, if you didn’t already know, I’m now informing you that it will always load the highest available version unless instructed otherwise. So, let’s use the new RequiredVersion parameter to instruct it otherwise:
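In sketch form:

```powershell
# Unload the auto-loaded 2.0.0 module, then load version 1.1 so that
# down-level (2012/2012 R2) hosts can be managed from this session
Remove-Module -Name Hyper-V
Import-Module -Name Hyper-V -RequiredVersion 1.1
```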

The results of this operation:

Is this good? Well, it’s OK, but not great. Popping a module in and out isn’t the worst thing in the world, but can you imagine scripting that to work against hosts of various levels? While possible, the experience would certainly be unpleasant. If you’re going to interactively control some down-level Hyper-V hosts, this approach would work well enough. For serious scripting work, I’d stick with the WMI/CIM cmdlets and explicit remoting.

If you have any questions about using the Hyper-V PowerShell module including installation, optimization or anything else, let me know in the comments below and I’ll help you out!

This blog was originally published in July 2017 but has been updated with corrections and new content to remain relevant as of March 2020.

Author: Eric Siron

The Complete Guide to Scale-Out File Server for Hyper-V

This article will help you understand how to plan, configure and optimize your SOFS infrastructure, primarily focused on Hyper-V scenarios.

Over the past decade, it seems that an increasing number of components are recommended when building a highly-available Hyper-V infrastructure. I remember my first day as a program manager at Microsoft when I was tasked with building my first Windows Server 2008 Failover Cluster. All I had to do was connect the hardware, configure shared storage, and pass Cluster Validation, which was fairly straightforward.

Failover Cluster with Traditional Cluster Disks

Figure 1 – A Failover Cluster with Traditional Cluster Disks

Nowadays, the recommended cluster configuration for Hyper-V virtual machines (VMs) adds additional management layers, such as Cluster Shared Volumes (CSV) disks, along with a clustered file server that hosts the file path used to access them, known as a Scale-Out File Server (SOFS). While the SOFS provides the fairly basic functionality of keeping a file share online, the configuration can be challenging even for experienced Windows Server administrators. To see the complete stack which Microsoft recommends, scroll down to the figures throughout this article. This may appear daunting, but do not worry, we’ll explain what all of these building blocks are for.

While there are management tools like System Center Virtual Machine Manager (SCVMM) that can automate the entire infrastructure deployment, most organizations need to configure these components independently. There is limited content online explaining how Scale-Out File Server clusters work and best practices for optimizing them. Let’s get into it!

Scale-Out File Server (SOFS) Capabilities & Limitations

A SOFS cluster should only be used for specific scenarios. The following features have been tested and are either supported, supported but not recommended, or not supported with the SOFS.

Supported SOFS scenarios

  • File Server
    • Deduplication – VDI Only
    • DFS Namespace (DFSN) – Folder Target Server Only
    • File System
    • SMB
      • Multichannel
      • Direct
      • Continuous Availability
      • Transparent Failover
  • Other Roles
    • Hyper-V
    • IIS Web Server
    • Remote Desktop (RDS) – User Profile Disks Only
    • SQL Server
  • System Center Virtual Machine Manager (VMM)

Supported, but not recommended SOFS scenarios

  • File Server
    • Folder Redirection
    • Home Directories
    • Offline Files
    • Roaming User Profiles

Unsupported SOFS scenarios

  • File Server
    • BranchCache
    • Deduplication – General Purpose
    • DFS Namespace (DFSN) – Root Server
    • DFS Replication (DFSR)
    • Dynamic Access Control (DAC)
    • File Server Resource Manager (FSRM)
    • File Classification Infrastructure (FCI)
    • Network File System (NFS)
    • Work Folders

Scale-Out File Server (SOFS) Benefits

Fundamentally, a Scale-Out File Server is a Failover Cluster running the File Server role. It keeps the file share path (\\ClusterStorage\Volume1) continually available so that it can always be accessed. This is critical because Hyper-V VMs use this file path to access their virtual hard disks (VHDs) via the SMB3 protocol. If this file path is unavailable, then the VMs cannot access their VHDs and cannot operate.

Additionally, it also provides the following benefits:

  • Deploy Multiple VMs on a Single Disk – SOFS allows multiple VMs running on different nodes to use the same CSV disk to access their VHDs.
  • Active / Active File Connections – All cluster nodes will host the SMB namespace so that a VM can connect or quickly reconnect to any active server and have access to its CSV disk.
  • Automatic Load Balancing of SOFS Clients – Since multiple VMs may be using the same CSV disk, the cluster will automatically distribute the connections. Clients are able to connect to the disk through any cluster node, so they are sent to the server with the fewest file share connections. By distributing the clients across different nodes, the network traffic and its processing overhead are spread out across the hardware, which maximizes performance and reduces bottlenecks.
  • Increased Storage Traffic Bandwidth – Using SOFS, the VMs will be spread across multiple nodes. This also means that the disk traffic will be distributed across multiple connections which maximizes the storage traffic throughput.
  • Anti-Affinity – If you are hosting similar roles on a cluster, such as two active/active file shares for a SOFS, these should be distributed across different hosts. Using the cluster’s anti-affinity property, these two roles will always try to run on different hosts eliminating a single point of failure.
  • CSV Cache – SOFS files which are frequently accessed will be copied locally on each cluster node in a cache. This is helpful if the same type of VM file is read many times, such as in VDI scenarios.
  • CSV CHKDSK – CSV disks have been optimized to skip the offline phase, which means that they will come online faster after a crash. Faster recovery time is important for high-availability since it minimizes downtime.

Scale-Out File Server (SOFS) Cluster Architecture

This section will explain the design fundamentals of Scale-Out File Servers for Hyper-V. The SOFS can run on the same cluster as the Hyper-V VMs it is supporting, or on an independent cluster. If you are running everything on a single cluster, the SOFS must be deployed as a File Server role directly on the cluster; it cannot run inside a clustered VM, because that VM will not start without access to the File Server. Neither the VM nor the virtualized File Server could start up, since each depends on the other.

Hyper-V Storage and Failover Clustering

When Hyper-V was first introduced with Windows Server 2008 Failover Clustering, it had several limitations that have since been addressed. The main challenge was that each VM required its own cluster disk, which made the management of cluster storage complicated. Large clusters could require dozens or hundreds of disks, one for each virtual machine. This was sometimes not even possible due to limitations created by hardware vendors which required a unique drive letter for each disk. Technically you could run multiple VMs on the same cluster disk, each with their own virtual hard disks (VHDs). However, this configuration was not recommended, because if one VM crashed and had to fail over to a different node, it would force all the VMs using that disk to shut down and fail over to other nodes. This caused unplanned downtime, so as virtualization became more popular, a cluster-aware file system known as Cluster Shared Volumes (CSV) was created. See Figure 1 (above) for the basic architecture of a cluster using traditional cluster disks.

Cluster Shared Volume (CSV) Disks and Failover Clustering

CSV disks were introduced in Windows Server 2008 R2 as a distributed file system that is optimized for Hyper-V VMs. The disk must be visible to all cluster nodes, must use NTFS or ReFS, and can be created from pools of disks using Storage Spaces.

The CSV disk is designed to host VHDs from multiple VMs from different nodes and run them simultaneously. The VMs can distribute themselves across the cluster nodes, balancing the hardware resources which they are consuming. A cluster can host multiple CSV disks and their VMs can freely move around the cluster, without any planned downtime. The CSV disk traffic communicates over standard networks using SMB, so traffic can be routed across different cluster communication paths for additional resiliency, without being restricted to use a SAN.

A Cluster Shared Volumes disk functions similarly to a file share hosting the VHD file, since it provides storage and controls access. Virtual machines can access their VHDs like clients would access a file hosted in a file share, using a path like \\ClusterStorage\Volume1. This file path is identical on every cluster node, so as a VM moves between servers it will always be able to access its disk using the same file path. Figure 2 shows a Failover Cluster storing its VHDs on a CSV disk. Note that multiple VHDs for different VMs on different nodes can reside on the same disk, which they access through the SMB share.

A Failover Cluster with a Cluster Shared Volumes (CSV) Disk

Figure 2 – A Failover Cluster with a Cluster Shared Volumes (CSV) Disk

Scale-Out File Server (SOFS) and Failover Clustering

The SMB file share used for the CSV disk must be hosted by a Windows Server File Server. However, the file share should also be highly-available so that it does not become a single point of failure. A clustered File Server can be deployed as a SOFS through Failover Cluster Manager as described at the end of this article.

The SOFS will publish the VHD’s file share location (known as the “CSV Namespace”) on every node. This active/active configuration allows clients to access their storage through multiple pathways, which provides additional resiliency and availability: if one node crashes, the VM temporarily pauses its transactions until it quickly reconnects to the disk via another active node, but it remains online.
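As an illustrative sketch (the share name and security group here are hypothetical), a continuously available SMB share for VM storage could be published from any SOFS node like this:

```powershell
# Publish a continuously available share on a CSV volume;
# the share and group names are purely illustrative
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1" `
    -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true
```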

Since the SOFS runs on a standard Windows Server Failover Cluster, it must follow the hardware guidance provided by Microsoft. One of the fundamental rules of failover clustering is that all the hardware and software should be identical. This allows a VM or file server to operate the same way on any cluster node, as all of the settings, file paths, and registry entries will be the same. Make sure you run the Cluster Validation tests and follow Altaro’s Cluster Validation troubleshooting guidance if you see any warnings or errors.

The following figure shows a SOFS deployed in the same cluster. The clustered SMB shares create a highly-available CSV namespace allowing VMs to access their disk through multiple file paths.

A Failover Cluster using Clustered SMB File Shares for CSV Disk Access

Figure 3 – A Failover Cluster using Clustered SMB File Shares for CSV Disk Access

Storage Spaces Direct (S2D) with SOFS

Storage Spaces Direct (S2D) lets organizations deploy small failover clusters with no shared storage. S2D will generally use commodity servers with direct-attached storage (DAS) to create clusters that use mirroring to replicate their data between local disks to keep their states consistent. These S2D clusters can be deployed as Hyper-V hosts, storage hosts or in a converged configuration running both roles. The storage uses Scale-Out File Servers to host the shares for the VHD files.

In Figure 4, a SOFS cluster is shown which uses storage spaces direct, rather than shared storage, to host the CSV volumes and VHD files. Each CSV volume and its respective VHDs are mirrored between each of the local storage arrays.

 A Failover Cluster with Storage Spaces Direct (S2D)

Figure 4 – A Failover Cluster with Storage Spaces Direct (S2D)

Infrastructure Scale-Out File Server (SOFS)

Windows Server 2019 introduced a new Scale-Out File Server role called the Infrastructure File Server. This functions as the traditional SOFS, but it is specifically designed to only support Hyper-V virtual infrastructure with no other types of roles. There can also be only one Infrastructure SOFS per cluster.

The Infrastructure SOFS can be created manually via PowerShell or automatically when it is deployed by Windows Azure Stack or System Center Virtual Machine Manager (SCVMM). This role will automatically create a CSV namespace share using the syntax \\InfraSOFSName\Volume1. Additionally, it will enable the Continuous Availability (CA) setting for the SMB shares, also known as SMB Transparent Failover.
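Creating it manually is a one-liner; as a sketch (the role name is illustrative):

```powershell
# Create the Infrastructure SOFS role on a Windows Server 2019 cluster
Add-ClusterScaleOutFileServerRole -Infrastructure -Name "InfraSOFSName"
```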

Infrastructure File Server Role on a Windows Server 2019 Failover Cluster

Figure 5 – Infrastructure File Server Role on a Windows Server 2019 Failover Cluster

Cluster Sets

Windows Server 2019 Failover Clustering introduced the management concept of cluster sets. A cluster set is a collection of failover clusters which can be managed as a single logical entity. It allows VMs to move seamlessly between clusters, which lets organizations create a highly-available infrastructure with almost limitless capacity. To simplify management of a cluster set, a single namespace can be used to access it. This namespace can run on a SOFS for continual availability, and clients will automatically be redirected to the appropriate location within the cluster set.

The following figure shows two Failover Clusters within a cluster set, both of which are using a SOFS. Additionally, a third independent SOFS is deployed to provide highly-available access to the cluster set itself.

A Scale-Out File Server with Cluster Sets

Figure 6 – A Scale-Out File Server with Cluster Sets

Guest Clustering with SOFS

Acquiring dedicated physical hardware is not required for the SOFS, as it can be fully virtualized. When a cluster runs inside of VMs instead of on physical hardware, this is known as guest clustering. However, you should not run a SOFS within a VM for which it provides the namespace, as the cluster could find itself unable to start the VM because it cannot access the VM’s own VHD.

Microsoft Azure with SOFS

Microsoft Azure allows you to deploy virtualized guest clusters in the public cloud. You will need at least 2 storage accounts, each with a matching number and size of disks. It is recommended to use at least DS-series VMs with premium storage. Since this cluster is already running in Azure, it can also use a cloud witness for its quorum.

You can even download an Azure VM template which comes as a pre-configured two-node Windows Server 2016 Storage Spaces Direct (S2D) Scale-Out File Server (SOFS) cluster.

System Center Virtual Machine Manager (VMM) with SOFS

Since the Scale-Out File Server has become an important role in virtualized infrastructures, System Center Virtual Machine Manager (VMM) has tightly integrated it into its fabric management capabilities.


VMM makes it fairly easy to deploy SOFS throughout your infrastructure on bare-metal or Hyper-V hosts. You can add existing file servers under management or deploy each SOFS throughout your fabric. For more information visit:

When VMM is used to create a cluster set, an Infrastructure SOFS is automatically created on the Management Server (if it does not already exist). This file share will host the single shared namespace used by the cluster set.


Many of the foundational components of a Scale-Out File Server can be deployed and managed by VMM. This includes the ability to use physical disks to create storage pools that can host SOFS file shares. The SOFS file shares themselves can also be created through VMM. If you are also using Storage Spaces Direct (S2D) then you will need to create a disk witness which will use the SOFS to host the file share. Quality of Service (QoS) can also be adjusted to control network traffic speed to resources or VHDs running on the SOFS shares.

Management Cluster

In large virtualized environments, it is recommended to have a dedicated management cluster for System Center VMM. The virtualization management console, database, and services are highly-available so that they can continually monitor the environment. The management cluster can use a unified storage namespace that runs on a Scale-Out File Server, granting additional resiliency to the storage and its clients.

Library Share

VMM uses a library to store files which may be deployed multiple times, such as VHDs or image files. The library uses an SMB file share as a common namespace to access those resources, which can be made highly-available using a SOFS. The data in the library itself cannot be stored on a SOFS, but rather on a traditional clustered file server.

Update Management

Cluster patch management is one of the most tedious tasks which administrators face as it is repetitive and time-consuming. VMM has automated this process through serially updating one node at a time while keeping the other workloads online. SOFS clusters can be automatically patched using VMM.

Rolling Upgrade

Rolling upgrade refers to the process where infrastructure servers are gradually updated to the latest version of Windows Server. Most of the infrastructure servers managed by VMM can be included in the rolling upgrade cycle, which functions like the Update Management feature. Different nodes in the SOFS cluster are sequentially placed into maintenance mode (so the workloads are drained), updated, patched, tested, and reconnected to the cluster. Workloads gradually migrate to the newly installed nodes while the older nodes wait to be updated, until all of the SOFS cluster nodes are running the latest version of Windows Server.

Internet Information Services (IIS) Web Server with SOFS

Everything in this article so far has referenced SOFS in the context of being used for Hyper-V VMs. SOFS is gradually being adopted by other infrastructure services to provide high-availability to their critical components which use SMB file shares.

The Internet Information Services (IIS) Web Server is used for hosting websites. To distribute the network traffic, multiple IIS servers are usually deployed. If they have any shared configuration information or data, this can be stored in the Scale-Out File Server.

Remote Desktop Services (RDS) with SOFS

The Remote Desktop Services (RDS) role has a popular feature known as user profile disks (UPDs) which allows users to have a dedicated data disk stored on a file server. The file share path can be placed on a SOFS to make access to that share highly-available.

SQL Server with SOFS

Certain SQL Server roles can use a SOFS to make their SMB connections highly-available. Starting with SQL Server 2012, the SMB file server storage option is offered for SQL Server databases (including master, msdb, model, and tempdb) and the database engine. The SQL Server itself can be standalone or deployed as a failover cluster installation (FCI).

Deploying a SOFS Cluster & Next Steps

Now that you understand the planning considerations, you are ready to deploy the SOFS. From Failover Cluster Manager, you will launch the High Availability Wizard and select the File Server role. Next, you will select the File Server Type. Traditional clustered file servers will use the File Server for general use. For SOFS, select Scale-Out File Server for application data.
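The same deployment can be sketched in PowerShell (the role and cluster names here are illustrative):

```powershell
# Ensure the File Server feature is present on each node
Install-WindowsFeature -Name FS-FileServer

# Create the Scale-Out File Server role on the cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS1" -Cluster "HVCluster"
```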

The interface is shown in the following figure and described as, “Use this option to provide storage for server applications or virtual machines that leave files open for extended periods of time. Scale-Out File Server client connections are distributed across nodes in the cluster for better throughput. This option supports the SMB protocol. It does not support the NFS protocol, Data Deduplication, DFS Replication, or File Server Resource Manager.”

Installing a Scale-Out File Server (SOFS)

Figure 7 – Installing a Scale-Out File Server (SOFS)

Now you should have a fundamental understanding of the use and deployment options for the SOFS. For additional information about deploying a Scale-Out File Server (SOFS), please visit Microsoft’s Scale-Out File Server documentation. If there’s anything you want to ask about SOFS, let me know in the comments below and I’ll get back to you!

Author: Symon Perriman