Perhaps the only thing worse than having a disaster strike your datacenter is the stress of recovering your data and services as quickly as possible. Most businesses need to operate 24 hours a day, and any service outage upsets customers and loses money. According to a 2016 study by the Ponemon Institute, the average datacenter outage costs enterprises over $750,000 and lasts about 85 minutes, costing roughly $9,000 per minute. While your organization may operate at a smaller scale, any downtime or data loss will hurt your reputation and may even jeopardize your career. This blog covers best practices for recovering your data from a backup and bringing your services back online as fast as possible.
Automation is key to decreasing your Recovery Time Objective (RTO) and minimizing downtime. Any manual step in the process creates a bottleneck. If the outage is caused by a natural disaster, relying on human intervention is particularly risky, as the datacenter may be inaccessible or remote connections may be unavailable. As you read about the best practices of detection, alerting, recovery, startup, and verification, consider how you could implement each step in a fully automated fashion.
The first way to optimize your recovery speed is to detect the outage as quickly as possible. If you have an enterprise monitoring solution like System Center Operations Manager (SCOM), it will continually check the health of your application and its infrastructure, looking for errors or other problems. Even if you have developed an in-house application and do not have access to enterprise tools, you can use Windows Task Scheduler to set up tasks that automatically check system health by scanning event logs, then trigger recovery actions. There are also many free monitoring tools, such as Uptime Robot, which alert you any time your website goes offline.
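As a sketch of the scheduled-task approach, the following PowerShell scans recent event log entries and launches a recovery script. The log levels, time window, and script path are all assumptions you would adapt to your own environment:

```powershell
# Run this from a scheduled task every few minutes.
# Scan the last five minutes of the System log for Critical/Error events.
$recentErrors = Get-WinEvent -FilterHashtable @{
    LogName   = 'System'
    Level     = 1, 2                      # 1 = Critical, 2 = Error
    StartTime = (Get-Date).AddMinutes(-5)
} -ErrorAction SilentlyContinue

if ($recentErrors) {
    # Start-Recovery.ps1 is a placeholder for your own recovery logic
    Start-Process -FilePath 'powershell.exe' `
        -ArgumentList '-File', 'C:\Scripts\Start-Recovery.ps1'
}
```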
Once the administrators have been alerted, immediately begin the recovery process. Meanwhile, you should run a secondary health check on the system to make sure that you did not receive a false alert. This is a great background task to continually run during the recovery process to make sure that something like a cluster failover or transient network failure does not force your system into restarting if it is actually healthy. If the outage was indeed a false positive, then have a task prepared which will terminate the recovery process so that it does not interfere with the now-healthy system.
If you restore your service and determine that there was data loss, you will need to decide whether to accept that loss or attempt to recover from the last good backup, which can extend the downtime during restoration. Make sure you can automatically determine whether you need to restore a full backup or whether a differencing backup is sufficient to give you a faster recovery time. By comparing the timestamp of the outage to the timestamp on your backup(s), you can determine which option will minimize the impact on your business. This can be done with a simple PowerShell script, but make sure that you know how to get this information from your backup provider and pass it into your script.
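A minimal sketch of that comparison, assuming your backup provider can report these timestamps (the values here are illustrative):

```powershell
# Timestamps would come from your backup provider's API or logs
$outageTime = Get-Date '2024-05-01 03:15'
$lastFull   = Get-Date '2024-05-01 01:00'
$lastDiff   = Get-Date '2024-05-01 03:00'

# A differencing backup newer than the full (and older than the outage)
# gives the fastest path back to a consistent state
if ($lastDiff -gt $lastFull -and $lastDiff -lt $outageTime) {
    'Restore the last full backup, then apply the differencing backup.'
} else {
    'Restore the last full backup only.'
}
```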
Once you have identified the best backup, you then need to copy it to your production system as fast as possible. A lot of organizations deprioritize their backup networks since they are only used a few times a day or week. This may be acceptable during the backup process, but these networks need to be optimized during recovery. If you do need to restore a backup, consider running a script that prioritizes this traffic, such as by changing quality of service (QoS) settings or disabling other traffic on that same network.
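For example, if your backup traffic rides on SMB, a QoS policy along these lines could raise its priority for the duration of the restore (a sketch; the policy name and priority value are assumptions):

```powershell
# Tag SMB traffic with a higher 802.1p priority during the restore
New-NetQosPolicy -Name 'BackupRestore' -SMB -PriorityValue8021Action 5

# Remove the policy once the restore completes
# Remove-NetQosPolicy -Name 'BackupRestore' -Confirm:$false
```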
Next, consider the storage media to which the backup is copied before the restoration happens. Try to use your fastest SSD disks to maximize the speed at which the backup is restored. If you decided to back up your data on a tape drive, you will likely have high copy speeds during restoration. However, tape drives usually require manual intervention to find and mount the right tape, which should generally be avoided if you want a fully automated process. There are other tradeoffs to weigh when choosing between tape drives and other media.
Once your backup has been restored, you need to restart the services and applications. If you are restoring to a virtual machine (VM), you can optimize its startup time by maximizing the memory allocated to it during startup and operation. You can also configure VM prioritization to ensure that this critical VM starts first in case it is competing with other VMs to launch on a host which has recently crashed. Enable QoS on your virtual network adapters to ensure that traffic flows through to the guest operating system as quickly as possible, which will speed up the time to restore a backup within the VM and help clients reconnect faster. Whether you are running the application within a VM or on bare metal, you can also use Task Manager to raise the priority of the important processes.
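On Hyper-V, the startup settings above can be scripted. This sketch assumes a VM named 'CriticalApp' and a memory sizing of 8GB; substitute your own names and values:

```powershell
# Start this VM automatically and first (no delay) after a host restart
Set-VM -Name 'CriticalApp' -AutomaticStartAction Start -AutomaticStartDelay 0

# Allocate generous startup memory so services come online quickly
Set-VMMemory -VMName 'CriticalApp' -DynamicMemoryEnabled $true -StartupBytes 8GB
```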
Now verify that your backup was restored correctly and your application is functioning as expected by running some quick test cases. If you are confident that those tests passed, you can allow users to reconnect. If they fail, work backward through the workflow to find the point of failure, or simply roll back to the next “good” backup and try the process again.
Anytime you need to restore from a backup, it will be a frustrating experience, which is why testing throughout your application development lifecycle is critical. Any single point of failure can cause your backup or recovery to fail, which is why this needs to be part of your regular business operations. Once your systems have been restored, always make sure your IT department does a thorough investigation into what caused the outage, what worked well in the recovery, and what areas could be improved. Review the time each step took to complete and ask yourself whether any of these should be optimized. It is also a good best practice to write up a formal report which can be saved and referred to in the future, even if you have moved on to a different company.
Hyper-V’s checkpointing system typically does a perfect job of coordinating all its moving parts. However, it sometimes fails to completely clean up afterward. That can cause parts of a checkpoint, often called “lingering checkpoints”, to remain. You can easily take care of these leftover bits, but you must proceed with caution. A misstep can cause a full failure that will require you to rebuild your virtual machine. Read on to find out how to clean up after a failed checkpoint.
Avoid Mistakes When Cleaning up a Hyper-V Checkpoint
The most common mistake is starting your repair attempt by manually merging the AVHDX file into its parent. If you do that, then you cannot use any of Hyper-V’s tools to clean up. You will have no further option except to recreate the virtual machine’s files. The “A” in “AVHDX” stands for “automatic”. An AVHDX file is only one part of a checkpoint. A manual file merge violates the overall integrity of a checkpoint and renders it unusable. A manual merge of the AVHDX files should be almost the last thing that you try.
Also, do not start off by deleting the virtual machine. That may or may not trigger a cleanup of AVHDX files. Don’t take the gamble.
Before you try anything, check your backup application. If it is in the middle of a backup or indicates that it needs attention from you, get through all of that first. Interrupting a backup can cause all sorts of problems.
How to Clean up a Failed Hyper-V Checkpoint
We have multiple options to try, from simple and safe to difficult and dangerous. Start with the easy things first and only try something harder if that doesn’t work.
Method 1: Delete the Checkpoint
If you can, right-click the checkpoint in Hyper-V Manager and use the Delete Checkpoint or Delete Checkpoint Subtree option:
This usually does not work on lingering checkpoints, but it never hurts to try.
Method 2: Use PowerShell

Sometimes the checkpoint does not present a Delete option in Hyper-V Manager.
Sometimes, the checkpoint doesn’t even appear.
In any of these situations, PowerShell can usually see and manipulate the checkpoint.
You can remove all checkpoints on a host at once:
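Assuming the Hyper-V PowerShell module, one way is (on older hosts the cmdlet is named Remove-VMSnapshot):

```powershell
# Remove every checkpoint on every VM on this host
Get-VM | Remove-VMCheckpoint
```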
If the script completes without error, you can verify in Hyper-V Manager that it successfully removed all checkpoints. You can also use PowerShell:
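For instance, a quick check that nothing remains:

```powershell
# Lists any surviving checkpoints; empty output means the host is clean
Get-VM | Get-VMCheckpoint
```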
This clears up the majority of leftover checkpoints.
Method 3: Create a New Checkpoint and Delete It
Everyone has had one of those toilets that won’t stop running. Sometimes, you get lucky, and you just need to jiggle the handle to remind the mechanism that it needs to drop the flapper ALL the way over the hole. Method 3 is something of a “jiggle the handle” fix. We just tap Hyper-V’s checkpointing system on the shoulder and remind it what to do.
In the Hyper-V Manager interface, right-click on the virtual machine (not a checkpoint), and click Checkpoint:
Now, at the root of all of the VM’s checkpoints, right-click on the topmost and click Delete checkpoint subtree:
If this option does not appear, then our “jiggle the handle” fix won’t work. Try to delete the checkpoint that you just made, if possible.
The equivalent PowerShell is Checkpoint-VM -VMName demovm followed by Remove-VMCheckpoint -VMName demovm.
Regroup Before Proceeding
I do not know how pass-through disks or vSANs affect these processes. If you have any and the above didn’t work, I recommend shutting the VM down, disconnecting those devices, and starting the preceding steps over. You can reconnect your devices afterward.
If your checkpoint persists after trying the above, then you now face some potentially difficult choices. If you can, I would first try shutting down the virtual machine, restarting the Hyper-V Virtual Machine Management service, and trying the above steps while the VM stays off. This is a bit more involved “jiggle the handle” type of fix, but it’s also easy. If you want to take a really long shot, you can also restart the host. I do not expect that to have any effect, but I have not yet seen everything.
Take a Backup!
Up to this point, we have followed non-destructive procedures. The remaining fixes involve potential data loss. If possible, back up your virtual machine. Unfortunately, you might only have this problem because of a failed backup. In that case, export the virtual machine. I would personally shut the VM down beforehand so as to only capture the most recent data.
If you have a good backup or an export, then you cannot lose anything else except time.
Method 4: Reload the Virtual Machine’s Configuration
This method presents a moderate risk of data loss. It is easy to make a mistake. Check your backup! This is a more involved “jiggle the handle” type of fix.
1. Shut the VM down.
2. Take note of the virtual machine’s configuration file location, its virtual disk file names and locations, and the virtual controller positions that connect them (IDE 1 position 0, SCSI 2 position 12, etc.).
3. On each virtual disk, follow the AVHDX tree, recording each file name, until you find the parent VHDX. In Hyper-V Manager, do this with the Inspect button on the VM’s disk sheet, then click Inspect Parent on each subsequent dialog box that opens.
4. Modify the virtual machine to remove all of its hard disks. If the virtual machine is clustered, you’ll need to do this in Failover Cluster Manager (or PowerShell). It will prompt to create a checkpoint, but since you already tried that, I would skip it.
5. Export the virtual machine configuration.
6. Delete the virtual machine. If the VM is clustered, record any special clustering properties (like Preferred Hosts), and delete it from Failover Cluster Manager.
7. Import the virtual machine configuration from step 5 into the location you recorded in step 3. When prompted, choose the Restore option.
8. This will bring back the VM with its checkpoints. Start at method 1 and try to clean them up.
9. Reattach the VHDX. If, for some reason, the checkpoint process did not merge the disks, do that manually first. If you need instructions, look at the section after the final method.
10. Re-establish clustering, if applicable.
We use this method to give Hyper-V one final chance to rethink the error of its ways. After this, we start invoking manual processes.
Method 5: Restore the VM Configuration and Manually Merge the Disks
For this one to work, you need a single good backup of the virtual machine. It does not need to be recent. We only care about its configuration. This process has a somewhat greater level of risk than method 4. Once we introduce the manual merge process, the odds of human error increase dramatically.
1. Follow steps 1, 2, and 3 from method 4 (turn the VM off and record configuration information). If you are not certain about the state of your backup, follow steps 5 and 6 (export and delete the VM). If you have confidence in your backup, or if you already followed step 4 and still have the export, then you can skip step 5 (export the VM).
2. Manually merge the VM’s virtual hard disk(s) (see the section after the methods for directions). Move the final VHDX(s) to a safe location. It can be temporary.
3. Restore the virtual machine from backup. I don’t think that I’ve ever seen a Hyper-V backup application that will allow you to only restore the virtual machine configuration files, but if one exists and you happen to have it, use that feature.
4. Follow whatever steps your backup application needs to make the restored VM usable. For instance, Altaro VM Backup for Hyper-V restores your VM as a clone with a different name and in a different location unless you override the defaults.
5. Remove the restored virtual disks from the VM (see step 4 of Method 4). Then, delete the restored virtual hard disk file(s) (they’re older and perfectly safe on backup).
6. Copy or move the merged VHDX file from step 2 back to its original location.
7. On the virtual machine’s Settings dialog, add the VHDX(s) back to the controllers and locations that you recorded in step 1.
8. Check on any supporting tools that identify VMs by ID instead of name (like backup). Rejoin the cluster, if applicable.
This particular method can be time-consuming since it involves restoring virtual disks that you don’t intend to keep. As a tradeoff, it retains the major configuration data of the virtual machine. Altaro VM Backup for Hyper-V will use a different VM ID from the original to prevent collisions, but it retains all of the VM’s hardware IDs and other identifiers such as the BIOS GUID. I assume that other Hyper-V backup tools exhibit similar behavior. Keeping hardware IDs means that your applications that use them for licensing purposes will not trigger an activation event after you follow this method.
Method 6: Rebuild the VM’s Configuration and Manually Merge the Disks
If you’ve gotten to this point, then you have reached the “nuclear option”. The risk of data loss is about the same as method 5. This process is faster to perform but has a lot of side effects that will almost certainly require more post-recovery action on your part.
1. Access the VM’s settings page and record every detail that you can from every property sheet. That means CPU, memory, network, disk, file location settings… everything. You definitely must gather the VHDX/AVHDX connection and parent-child-grandchild (etc.) order (method 4, step 3). If your organization utilizes special BIOS GUID settings and other advanced VM properties, then record those as well. I assume that if such fields are important to you that you already know how to retrieve them. If not, you can use my free tool.
2. Check your backups and/or make an export.
3. Delete the virtual machine (Method 4 step 6 has a screenshot; mind the note about failover clustering as well).
4. Recreate the virtual machine from the data that you collected in step 1, with the exception of the virtual hard disk files. Leave those unconnected for now.
5. Follow the steps in the next section to merge the AVHDX files into the root VHDX.
6. Connect the VHDX files to the locations that you noted in step 1 (Method 5 step 7 has a screenshot).
7. Check on any supporting tools that identify VMs by ID instead of name (like backup). Rejoin the cluster, if applicable.
8. In the VM’s guest operating system, check for and deal with any problems that arise from changing all of the hardware IDs.
Since you don’t have to perform a restore operation, it takes less time to get to the end of this method than method 5. Unfortunately, swapping out all of your hardware IDs can have negative impacts. Windows will need to activate again, and it will not re-use the previous licensing instance. Other software may react similarly, or worse.
How to Manually Merge AVHDX Files
I put this part of the article near the end for a reason. I cannot over-emphasize that you should not start here.
Prerequisites for Merging AVHDX Files
If you precisely followed one of the methods above that redirected you here, then you already satisfied these requirements. Go over them again anyway. If you do not perform your merges in precisely the correct order, you will permanently orphan data.
Merge the files in their original location. I had you merge the files before moving or copying them for a reason. Each differencing disk (AVHDX) contains the FULL path of its parent. If you relocate the files, they will throw errors when you attempt to merge them. If you can’t get them back to their original location, then read below for steps on updating each of the files.
You will have the best results if you merge the files in the order that they were created. A differencing disk knows about its parent, but no parent virtual disk file knows about its children. If you merge them out of order, you can correct it — with some effort. But, if any virtual hard disk file changes while it has children, you will have no way to recover the data in those children.
If merged in the original location and in the correct order, AVHDX merging poses no risks.
Manual AVHDX Merge Process in PowerShell
I recommend that you perform merges with PowerShell because you can do it more quickly. Starting with the AVHDX that the virtual machine used as its active disk, issue the following command:
Merge-VHD -Path 'C:\LocalVMs\demovm\Virtual Hard Disks\demo-data_8EFF0E79-2711-4115-A704-45046FE6C536.avhdx'
Once that finishes, move to the next file in your list. Use tab completion! Double-check the file names from your list!
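If you prefer to script the whole chain, a sketch like the following works, assuming you list the files newest (active) first so each merge folds into its immediate parent. The paths are illustrative; verify them against the list you recorded:

```powershell
# Ordered newest to oldest; double-check against the parent chain you recorded
$chain = @(
    'C:\LocalVMs\demovm\Virtual Hard Disks\demo-data_child2.avhdx',
    'C:\LocalVMs\demovm\Virtual Hard Disks\demo-data_child1.avhdx'
)

foreach ($disk in $chain) {
    Merge-VHD -Path $disk   # merges this AVHDX into its immediate parent
}
```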
Once you have nothing left but the root VHDX, you can attach it to the virtual machine.
Manual AVHDX Merge Process in Hyper-V Manager
Hyper-V Manager has a wizard for merging differencing disks. If you have more than a couple of disks to merge, you will find this process tedious.
1. In Hyper-V Manager, click Edit Disk in the far right pane.
2. Click Next on the wizard’s intro page if it appears.
3. Browse to the last AVHDX file in the chain.
4. Choose the Merge option and click Next.
5. Choose to merge directly to the parent disk and click Next.
6. Click Finish on the last screen.
Repeat until you only have the root VHDX left. Reattach it to the VM.
Fixing Parent Problems with AVHDX Files
In this section, I will show you how to correct invalid parent chains. If you have merged virtual disk files in the incorrect order or moved them out of their original location, you can correct it.
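The cmdlet for this is Set-VHD with the ParentPath parameter; the paths below are illustrative:

```powershell
# Re-point the orphaned child at the correct parent disk
Set-VHD -Path 'C:\LocalVMs\demovm\Virtual Hard Disks\demo-data_child.avhdx' `
        -ParentPath 'C:\LocalVMs\demovm\Virtual Hard Disks\demo-data.vhdx'
```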
Set-VHD with the ParentPath parameter will work even if the disk files have moved from their original locations. If you had a disk chain of A->B->C and merged B into A, then you can use Set-VHD to set the parent of C to A, provided that nothing else happened to A in the interim.
The virtual disk system uses IDs to track valid parentage. If a child does not match to a parent, you will get the following error:
You could use the IgnoreIdMismatch switch to ignore this message, but a merge operation will almost certainly cause damage.
Alternatively, if you go through the Edit Disk wizard as shown in the manual merge instructions above, then at step 4, you can sometimes choose to reconnect the disk. Sometimes though, the GUI crashes. I would not use this tool.
Errors Encountered on AVHDX Files with an Invalid Parent
The errors that you get when you have an AVHDX with an invalid parent usually do not help you reach that conclusion.
An error was “encountered opening a virtual hard disk in the chain of differencing disks”: ‘The system cannot find the file
Because it lists the child AVHDX in both locations, along with an empty string where the parent name should appear, it might seem that the child file has the problem.
In Hyper-V Manager, you will get an error about “one of the command line parameters”. It will follow that up with a really unhelpful “Property ‘MaxInternalSize’ does not exist in class ‘Msvm_VirtualHardDiskSettingData’”. All of this just means that it can’t find the parent disk.
Use Set-VHD as shown above to correct these errors.
Other Checkpoint Cleanup Work
Checkpoints involve more than AVHDX files. Checkpoints also grab the VM configuration and sometimes its memory contents. To root these out, look for folders and files whose names contain GUIDs that do not belong to the VM or any surviving checkpoint. You can safely delete them all. If you do not feel comfortable doing this, then use Storage Migration to move the VM elsewhere. It will only move active files. You can safely delete any files that remain.
What Causes Checkpoints to Linger?
I do not know that anyone has ever determined the central cause of this problem. We do know that Hyper-V-aware backups will trigger Hyper-V’s checkpointing mechanism to create a special backup checkpoint. Once the program notifies VSS that the backup has completed, it should automatically merge the checkpoint. Look in the event viewer for any clues as to why that didn’t happen.
This article focuses exclusively on administration and management of OneDrive for Business. We will cover advice and best practices from my extensive experience working with the service, ideal for system admins and those actively working with it on a daily basis.
What is Microsoft OneDrive?
Microsoft has two different, but similar services called OneDrive, both of which offer cloud file storage for users. A free version of OneDrive is available to everyone and is often called the “consumer” version. The business version is “OneDrive for Business” and requires a subscription to Microsoft 365 or Office 365. Both look a lot alike but are managed very differently. To add to the mix, Microsoft often refers to OneDrive for Business as simply “OneDrive” in their documentation and even in the UI.
Note: I may refer to OneDrive instead of OneDrive for Business from time to time in this article for the sake of brevity, but I always mean OneDrive for Business unless otherwise stated.
OneDrive for Business has company-wide administration in mind. A service administrator can control the deployment of the synchronization app, network performance, and many other settings. With OneDrive (consumer), there is no management framework. The individual using the service controls their settings.
Where Should Users Save Files?
OneDrive for Business makes it very easy to share files with others, but if you find yourself sharing lots of files, it is recommended to use Teams or SharePoint instead. Teams and SharePoint are simply better for collaboration. For example, with OneDrive, you can’t check a document in or out. In Teams, any document you upload is available to the entire team by default, whereas documents you upload to OneDrive are private by default. Also, in Teams, a conversation about a document is shared in a Teams channel rather than via email. The general guidance: if you are working on a file without others involved, use OneDrive for Business. If you need others involved, use a more collaborative service such as Teams or SharePoint.
OneDrive for Business uses SharePoint Online as Service
As the service administrator, one of the most important concepts to master is that OneDrive for Business is a special-purpose SharePoint document library created automatically for every user in your company. When a user is assigned an Office 365 or Microsoft 365 license, the service automatically creates a personal OneDrive for Business document library.
The URL for OneDrive for Business is formatted as follows:
https://<company base name>-my.sharepoint.com/personal/<user-id>
The landing page (shown above) for OneDrive for Business shows “My Files” which are your files. You can also navigate from here to any SharePoint asset, including SharePoint Document Libraries, files hosted for Teams, or other SharePoint content.
Now that you know OneDrive for Business is using SharePoint under the hood, the following guidance makes sense:
To manage the OneDrive sharing settings for your organization, use the Sharing page of the new SharePoint admin center, instead of the Sharing page in the OneDrive admin center. It lets you manage all the settings and latest features in one place.
One main reason OneDrive for Business is well-liked is that it’s so easy to share a document with anyone. You can send someone a URL to a document and relax. It just works, and you won’t hear the dreaded “I can’t open the document” (which is all too common and a huge productivity sink).
The screenshot below exemplifies my point. What’s being shown is the side-by-side sharing experience in Teams vs. OneDrive. Take note! There is no Share option in Teams. You can copy the link to the file, but you must know if the user you send it to has rights to view the document in the Teams library. In OneDrive for Business, however, there is a Share option that allows you to send a URL to anyone. This is called Anonymous Access and is one of the primary reasons users share from OneDrive rather than Teams.
Also, in OneDrive, if you click on Anyone with the link can edit, you can further refine the Sharing options.
As a side note, users frustrated by Teams’ lack of sharing controls can easily open a document or folder in SharePoint instead of Teams (as shown below). In SharePoint, you can share the file with anyone just like in OneDrive. There’s no need to copy a file in Teams to OneDrive to share anonymously. Just open it in SharePoint instead!
Controlling Default Permissions
Many businesses prefer to control who can open company documents. You can change the default settings in the OneDrive administration center, but let’s follow Microsoft’s advice to use SharePoint administration instead.
There are separate controls for External Sharing for SharePoint and OneDrive, ranging from Only people in your Organization to Anyone. However, what a static snapshot does not reveal is that the OneDrive settings cannot be more permissive than SharePoint. If you lower the permission on SharePoint, the permission also lowers on OneDrive. OneDrive can be more restrictive than SharePoint but never less restrictive. Since SharePoint hosts OneDrive files, this makes sense.
These settings are company-wide. Let users know before you make changes to global settings that cause changes in expected behavior. You WILL hear from them, and it generally won’t be a happy face emoji.
Savvy admins can control sharing using options available when you click on More external sharing settings on the same screen shown above:
The option Limit external sharing by domain lets you allow or deny sharing to a particular domain. This can be a great way to go when you want to constrain sharing to a specific set of partners or external resources.
Allow only users in specific security groups to share externally lets you control who can share files with people outside your organization. A security group is an Azure AD object that is generally a collection of users and other groups. After populating the security group with users, you can assign permissions and policies to the group, such as granting the group access to a SharePoint site, a mailbox, or forcing members of the group to use 2-factor authentication.
Consider the following scenario. Marketing is involved with a lot of external sharing, so we want to enable sharing for members of Marketing but deny everyone else, AND we don’t want to have to make adjustments every time someone moves into or out of marketing.
To illustrate how this can be achieved with security groups, I created a security group in Azure AD named Marketing-Org and added four users. As employees come and go, members of marketing are added to and removed from this group. (If you haven’t created security groups in Azure AD, it’s straightforward.)
Next, (shown below) I created another security group called External-Sharing.
Security groups can have other security groups as members! By adding Marketing-Org to External-Sharing, the users in Marketing-Org automatically inherit the External-Sharing group’s permissions and policies.
After that, I assigned the sharing permissions to the External-Sharing group. Return to the SharePoint admin center: Policies -> Sharing -> More external sharing settings -> Allow only users in specific security groups to share externally. Then, by clicking Manage security groups (shown below), I added the External-Sharing group and set it so members can share with Anyone. To limit everyone else, I added the built-in security group Everyone except external users and set it to share with Authenticated guests only.
In this way, everyone in the company can only share with authenticated guests, whereas only the members of External-Sharing can share with anyone.
The screenshot below shows the result. The user on the left is not a member of the External-Sharing group (the Anyone option is grey and cannot be selected). However, the user on the right can.
Once configured, effective administrators can manage membership of the security groups using PowerShell with the Add-AzureADGroupMember and associated cmdlets.
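For example, adding a user to the External-Sharing group looks like this (the user principal name is a placeholder; the group and user objects are looked up first):

```powershell
# Find the group and user, then add the user as a member
$group = Get-AzureADGroup -SearchString 'External-Sharing'
$user  = Get-AzureADUser -ObjectId 'jane@contoso.com'
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId
```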
Storage space per user
Most Microsoft 365 and Office 365 plans come with 1TB of OneDrive storage per user. If a plan has more than five users, administrators can increase that 1TB to 5TB. You can even go to 25TB on a user-by-user basis by filing a support ticket with Microsoft.
To increase the storage limit for all users, browse to the OneDrive administration console, and select Storage. Change the setting from 1024 to the new limit. Shown below is updating the limit to 5TB. There are no additional charges for the increase in capacity.
To change the limit for an individual user, you have to construct the OneDrive URL from the company name and user name, as described earlier. You can find the user name in the list of active users in the Office or Microsoft 365 admin center.
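With the URL in hand, the per-user change is a single SharePoint Online Management Shell cmdlet. The tenant name 'contoso' and the user ID below are placeholders:

```powershell
# Connect to the tenant's SharePoint admin endpoint
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'

# Set the quota in megabytes; 5242880 MB = 5 TB
Set-SPOSite -Identity 'https://contoso-my.sharepoint.com/personal/jane_contoso_com' `
            -StorageQuota 5242880
```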
For <Quota>, enter a number of megabytes between 1024 (1GB, the minimum) and 5242880 (5TB). Values are rounded up; 1TB is 1048576.
As of this writing, OneDrive allows files up to 100GB.
In some scenarios, you may want to collect files from others, rather than send files to others. OneDrive for Business makes this easy with the Request Files feature. With this feature, users can send an email asking others to upload content to a specific folder.
To set up a request files email, in the OneDrive UI, select a folder, click on the ellipses (…), and click Request files. You will see a window similar to the one shown below.
After clicking Next, you will see the Send file request window:
The email sent by this form provides a URL for uploading content to the OneDrive for Business folder. Request files is a great way to collect and concentrate needed files into a single location for processing. That said, you need to make sure to enable uploads for the folder locations in the request.
Of course, a savvy administrator is thinking, “Hmm, does this provide a way for these users to upload content forever to this location?”
Shown below is the SharePoint admin center for Policies, Sharing.
With these settings, you can put some boundaries around the ability to upload files to the location access given in the Request files invitation. These settings apply to anonymous links sent from OneDrive and SharePoint as well. As a best practice, if you permit users to send links to Anyone, which is enabled by default, you should expire those links. Otherwise, over a period of years, there can be hundreds or thousands of URLs that provide access to your content, making access control distressingly challenging or impossible without disabling anonymous access altogether.
Folders must be set to View, edit, and upload as shown above to allow users to upload files in response to a file request.
One of the main features of OneDrive for Business is the ability to synchronize files from a user’s PC or laptop with OneDrive. With the sync service running, users can work on files locally, and the changes are sent to the cloud. Also, well-known folder locations such as Documents can be synchronized, ensuring essential documents are both local and in the cloud. You can easily sync Teams File Repositories as well as SharePoint Document Libraries.
The synchronization service is part of Windows 10, so you do not generally need to download it individually. Users can install the service by clicking Start and typing OneDrive.
Click on the OneDrive app to launch the setup. OneDrive is then accessible in the taskbar as the cloud icon (shown before logging in, below).
Alternatively, users can enable the client by logging into onedrive.microsoft.com and clicking Sync.
When installed, users can enjoy the integration of OneDrive with Windows File Explorer. A OneDrive location is visible in the File listing. The OneDrive file listing is unique as you can see if a file is in the cloud (cloud icon), local and in the cloud (checkmark), or synchronizing (arrows). Also, when you right-click on a file in the OneDrive folder, you can Share a file, View online, and check the version history.
Pay particular attention to the following icons. Shown below is a screenshot of one that appears during the installation of the OneDrive client.
TAKE NOTE – Files On-Demand enabled by default!
Imagine this scenario. You are working on an important project with several others. A Teams site is used for collaboration. You’re headed out for an important meeting with your clients, and a colleague posts several important files to Teams. You’ve installed the sync client, and you’re headed off to the airport, so you think “no worries, I’ve got them synced to my laptop, and I can view them in flight.” Aloft, you open your laptop and see there is a cloud icon next to files. Clicking on a file, it’s not accessible. What happened?
What happened is that Files On-Demand is enabled by default.
Files On-Demand marks content that appears in the cloud as cloud-only. A file added to a Teams File Repository will not automatically sync locally. It’s not available offline until you open the file, or set the file or folder to Always keep on this device. Optionally, you could also disable Files On-Demand, which we’ll get to in a minute.
For an important file or folder, right-click in Windows Explorer and select Always keep on this device. Users can also disable Files On-Demand in the OneDrive client by opening the client and clicking More->Settings->Settings, then clear the checkbox that reads Files On-Demand.
When you clear the checkbox, a pop-up message says that, indeed, the files will download to your PC instead of being cloud-only.
Be advised that as the message above states, if your files in OneDrive for Business take up, say, 1TB, then that 1TB will be downloaded to your PC. Local storage needs to allow for this. Also, administrators need to consider the impact on bandwidth should you disable Files On-Demand for many users at the same time.
As an alternative, consider instructing users to mark the files and folders they always want available offline with Always keep on this device in Windows File Explorer, as previously discussed. You can then keep Files On-Demand enabled to preserve bandwidth: only the designated files and folders will be permanently synced, files you open will be synced temporarily, and all others will reside in the cloud.
For small businesses, administrators can manage OneDrive for Business effectively with the OneDrive for Business administration console. Larger organizations will be interested in using policy. The policy system for Microsoft and Office 365 is considered the most efficient way to manage many settings including those for OneDrive for Business. Policy-based administration provides administrators control, scale, repeatability, and flexibility.
Policy automation can be a complicated topic and breaks into different scenarios depending on your network architecture and configuration. For those with on-premises Active Directory environments, you manage policy via SCCM or Azure AD Domain Services.
If your environment is cloud-only (meaning you are not using domain controllers locally), Microsoft’s Intune service lets you deploy the OneDrive sync service to desktops using the Microsoft Endpoint Manager admin center.
You can also create and apply profiles to users that control OneDrive behavior. Shown below is a policy profile limiting the client upload rate to a percentage of available bandwidth. This is one of many possible settings to control OneDrive policies in Microsoft Endpoint Manager.
Previously, you saw how you can limit sharing with anonymous users to members of a specific security group. Similarly, you can apply different policy profiles to different security groups.
In this way, you manage the behavior of OneDrive and many other aspects of your cloud service by membership in security groups. It’s easy to imagine uses for this practice with a group for New Hires, Legal-Review-Team, Alliance Partners, Vendors, or other typical roles with differing needs in a busy organization.
With regard to OneDrive, you want to be thoughtful about bandwidth consumption in your company, especially on the initial deployment of OneDrive for Business. More than one company has had issues with essential business services becoming sluggish when hundreds or even thousands of newly deployed OneDrive for Business sync clients start downloading content at the same time. Files On-Demand, as discussed earlier, helps significantly to reduce the initial bandwidth hit because, when it is enabled, files located in the cloud are not automatically downloaded to clients.
Known folder moves (discussed next) can also impact network performance by automatically uploading users’ local folders to the cloud when the client is deployed.
In a larger business, you can use policy to push the desired settings, including the ability to mark OneDrive network traffic with QoS settings.
Known Folder Moves
Finally, a feature called Known Folder Moves is of keen interest to administrators as it can help reduce support desk calls and ease users’ transitions to new computers when replaced or upgraded.
As you probably know, specific folders in Windows, such as Documents, Desktop, and Pictures, are special. These are “known folders” because they are in the same location in the file system on every Windows installation.
OneDrive includes a feature where known folder locations are synced to OneDrive for Business. When a user needs a file in one of these locations and their PC is not available, they can access it from any device with an internet connection, including a mobile device. Also, when a user moves to a new PC or laptop, all the previous documents, images, and important files are online and can easily be synced back to the new device.
Known Folder Moves can be enabled in the sync client by clicking on Settings->Backup->Manage Backup.
Of course, you can also use policy with the methods previously discussed. Should you decide to roll this out, be mindful of bandwidth impacts and network performance, as all that content will be uploaded to the cloud.
OneDrive for Business is an exceptionally useful service. In this article, we’ve discussed many of the key considerations, benefits, best practices, and capabilities of OneDrive for Business so you can effectively manage the service for users. A capable administrator will understand the business use cases for sharing as well as the network impact of OneDrive for Business, and be familiar with how to administer the service including using policy to enforce the desired settings for your Business.
When set up, users will enjoy cloud access to essential files, including their Desktop, Documents, Pictures, Team sites, and other files of importance, allowing them to share content quickly and work locally or collaboratively.
Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs.
It’s Time to Recognize the Real Value of System Admins
If the coronavirus pandemic has taught the business world anything, it’s that the humble system admin should no longer be undervalued. The impact on businesses worldwide and the overnight transition of millions into remote working situations has shed light on the real value of system admins.
The Unsung Heroes of the Modern Workplace
Sysadmins are the pillars of everyday operations in any modern business. They are the silent agents that ensure everyone is actually able to carry out their work. If everything is running smoothly, it’s because they are doing their job. The value of their work often goes unnoticed because, ironically, that’s their goal – for technical issues to be fixed unnoticed and everyone else to carry on unaffected. But apart from keeping the waters calm, they are also ready to spring into action and save your ass by resetting that password you forgot for the 17th time that week (not speaking from personal experience or anything… sorry Phil).
As COVID-19 stunned the world and our front-line and healthcare workers rushed to save lives, in the business world it was system admins who we turned to for help. Transitioning literally millions of employees into remote working is no mean feat, but system admins rose to the challenge and saved businesses that were on their knees. Furthermore, with this enforced shift to remote working, cyberattacks have been on the rise, seeking to exploit any new vulnerabilities exposed in the transition, but our trusty system admins were there again to protect us.
2020 has been an extremely tough year for many people, but without system admins, it could have been far worse. So, in celebration of SysAdmin Day on 31 July, we decided to give back to our sysadmin heroes in recognition of their hard work.
Rewarding System Admins on SysAdmin Day
If you are an Office 365, Hyper-V, or VMware user, celebrate with us. All you have to do is sign up for a 30-day free trial of either Altaro VM Backup or Altaro Office 365 Backup – it’s your choice! – and you’ll get a guaranteed $/€20 Amazon voucher plus the chance to win one of our grand prizes including SONY WH-1000XM3 Wireless Noise-cancelling headphones, Tri-Band Wi-Fi 6 Router, DJI Osmo Pocket, and more!
Azure Stack HCI gives us the ability to run virtual machines on-premises on hyperconverged infrastructure and to have all that connected to Azure. But what does this mean for IT admins currently using Hyper-V? Let’s find out.
Most people will agree that cloud computing, and the benefits it brings, has revolutionized the tech industry. It has completely changed the way that IT runs its operations. The cloud provides agility that CIOs now take for granted, such as abstracting services from hardware, dynamic and rapid scalability, empowering DevOps through automation, and consumption-based billing.
While almost every organization wants to leverage this technology, not all workloads make sense on public clouds like Microsoft Azure. This is often due to regulatory restrictions that require customer data and services to be hosted onsite. Microsoft has addressed this challenge by offering on-premises hardware solutions through its partners, in the form of Azure Stack Hyperconverged Infrastructure (HCI). This allows Hyper-V administrators to use Azure services, typically reserved for the cloud, with on-premises workloads.
What is Azure Stack HCI?
Azure Stack HCI is a virtualization-focused operating system that blends Windows Server technologies and hyperconverged infrastructure (HCI) with new Azure hybrid cloud integrations. Azure Stack HCI features the same Hyper-V based software-defined compute, storage, and networking as Azure Stack and shares the same rigorous testing and validation criteria. Microsoft has partnered with dozens of server, storage, and networking vendors to provide validated hyperconverged solutions, often referred to as a “datacenter in a box”.
This new operating system is currently being developed in parallel with the Windows Server family. It aims to strike a balance between the traditional general-purpose OS and a more purpose-driven virtualization solution such as ESXi. Azure Stack HCI focuses on being the “go-to OS” for Hyper-V, Failover Clustering, Storage Spaces Direct (S2D), Virtual Desktop Infrastructure (VDI), and Software-Defined Networking. Even though it is technically a different OS, it still keeps the standard management and maintenance operations familiar to Windows Server and Hyper-V admins.
Azure Stack HCI Benefits
Azure Stack HCI was designed to offer the solutions inherent to the Azure Stack cloud whilst still retaining the benefits that come from a traditional datacenter, such as defined security and compliance. As for the cloud solutions brought on-premises, there are quite a few exciting ones (with more expected to come), but one that really stands out is Azure Monitor. This tracks what is happening across your applications, network, and infrastructure, and stores that telemetry data on Azure Blob Storage. It then uses advanced analytics with AI to make sense of the data and proactively identify issues before they become real problems.
Similar to ‘Windows Server Core’, Azure Stack HCI has no GUI, which sharpened Microsoft’s focus on providing centralized hybrid cloud management (the fabled Single Pane of Glass). This meant major improvements to Windows Admin Center (WAC), especially with regard to the management of virtualization-related roles (the S2D setup wizard is super!), as well as a bunch of shiny new PowerShell cmdlets. Windows Admin Center makes it easy for enterprises and Managed Service Providers (MSPs) to add the value the cloud brings to your on-prem Azure Stack HCI, with wizards for setting up solutions such as Azure Site Recovery (DRaaS), Azure Arc (Azure management), Azure File Sync, Cloud Witness (think Failover Quorum, but Cloud), Azure Monitor, and Update Management (think WSUS.. but when it works).
Many industry experts believe that Microsoft now has the strongest hybrid cloud solution, and its latest addition, Azure Stack HCI, is certainly a game changer for current Hyper-V users due to its flexibility, ease of use, and seamless integration with Azure. Industry partners seem to agree as many are now starting to test and support Azure Stack HCI.
Free Azure Stack HCI Backup
As announced during Microsoft Inspire 2020, Altaro is an early adopter of the technology and our award-winning backup solution Altaro VM Backup now fully supports backups of Azure Stack HCI 20H2.
One of the key benefits of cloud computing is the resource optimization achieved through hardware abstraction. This article explains how Azure Spot can provide a scalable virtualization management solution saving you from using unnecessary VMs.
What is Azure Spot?
When a workload becomes virtualized, it is no longer bound to any specific server and can be placed on any host with available and compatible resources. This allows organizations to consolidate their hardware and reduce their expenses by optimizing the placement of these virtual machines (VMs) so that they are most efficiently using the underlying hardware. For example, a server running several VMs with only 50% resource utilization could likely host a few more VMs or other services with its remaining capacity. If additional capacity is needed, the lowest priority VMs could be evicted from the host and replaced with more important workloads. Today, this is a common practice in private clouds where the admin has access to the hosts; however, it has been challenging to do in a public cloud, where access to the physical hardware is not provided to users.
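That private-cloud practice of evicting the lowest-priority VMs to make room can be sketched as a toy model in Python (hypothetical VM names, priorities, and CPU counts; real schedulers are far more sophisticated):

```python
def place_vm(host_free_cpus, running, new_vm):
    """Toy placement: evict the lowest-priority running VMs until the
    new, more important VM fits. `running` is a list of
    (name, priority, cpus) tuples, where a higher priority number means
    a more important workload. Returns the names of VMs evicted, or
    None if the new VM cannot fit even after evictions."""
    evicted = []
    free = host_free_cpus
    # Consider the least important workloads first
    for name, prio, cpus in sorted(running, key=lambda vm: vm[1]):
        if free >= new_vm[2]:
            break
        if prio < new_vm[1]:
            evicted.append(name)
            free += cpus
    return evicted if free >= new_vm[2] else None

# A low-priority batch VM is evicted to make room for a database VM:
print(place_vm(2, [("batch", 1, 4), ("web", 5, 4)], ("db", 3, 4)))
# ['batch']
```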
Microsoft Azure recently announced the general availability of Azure Spot Virtual Machines, which enables low priority VMs to be evicted and shut down when the host’s resources are needed. This provides equivalent features as Amazon Web Services (AWS) EC2 Spot Instances and Google Cloud Platform (GCP) Preemptible VMs. Similar functionality was provided by Microsoft Azure in the past through Virtual Machine Scale Set (VMSS) using low priority VMs. Spot VMs are replacing low priority VMs and any existing VMSS low priority VMs should be migrated to Spot VMs.
Azure Spot VMs should only be used in specific scenarios; however, they are very cheap to run. Essentially these Spot VMs run on “unsold” cloud capacity, so they are significantly discounted (up to 90%) and can be assigned a capped maximum price. We will now discuss an overview of the technology and key workloads you should move to Azure Spot VMs to minimize your operating costs. If you are a managed service provider (MSP), then these are recommendations that you can pass along to your tenants.
Azure Spot VM Workloads
The most important characteristic of a Spot VM is that it can be turned off and evicted at any time when its resources are needed by another VM or if the cost becomes too high. This is a hard shutdown, equivalent to switching off the power, so you will only have a limited window to save any data or the state of the VM. This means that VMs which are being accessed by customers or contain data that you want to retain are generally not suitable. Here are the most common workloads or tasks deployed with Windows or Linux Spot VMs.
Stateless: A service which just serves a single purpose, and is not required to store any state, such as a web server with static content.
Interruptible: A service that can be stopped at any time, such as a brute-force testing application.
Short: A service that can run quickly, such as a scalable application.
Batch: A service that collects then batches a series of tasks to maximize hardware utilization for a short burst, such as a large-scale computation.
Untimed: A service that contains many tasks which can take a long time to complete, but has no specific deadline for completion.
Dev/Test: A service that is commonly used for continuous integration or delivery for a development team.
A Spot VM functions just like a pay-as-you-go VM, with the exception that it can be evicted. If you decide to use Spot VMs for different types of workloads, this is fully supported, but you accept the risks that come with it. 30 seconds before an eviction happens, a notification is provided which can be detected by the system so it can try to save data or close connections before the Spot VM is stopped. You could try to create VM checkpoints at regular intervals, which will allow you to restore a Spot VM to its last saved state, however, you would pay for the storage capacity for the virtual hard disk and its checkpoint(s), even if the VM is offline.
Deploying an Azure Spot VM
It is simple to configure a Spot VM in any Azure region. When creating a new VM using the Azure Portal, Azure PowerShell, Azure CLI or an Azure Resource Manager (ARM) Template, you can specify that the VM should use an Azure Spot instance. Using the Portal as the example, you will find that the Basics > Instance details tab lets you enable the Spot VM functionality (it is disabled by default). If you select a Spot VM, you can define the eviction policy and price controls, which are covered in a later section of this article. The following screenshot shows these settings. You can then create and manage the VM like any other Windows or Linux VM running in Azure.
Figure 1 – Creating an Azure Spot VM from the Azure Portal
Evicting an Azure Spot VM
While an Azure Spot VM gives you a great deal on pricing, the VM can be shut down at any time and preempted by another VM from a customer willing to pay more, or if the cost exceeds the maximum price you have agreed to pay. You will receive a 30-second warning through the event channel that an eviction is about to happen, which may give you a chance to save some data or even take a snapshot, but this should not be used as a reliable way to retain important data. To see these notifications you can subscribe to Scheduled Events, which can trigger any type of task or command, such as attempting to save the data.
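A minimal Python sketch of reacting to that 30-second warning is shown below. The endpoint and event fields follow the Azure Instance Metadata Service Scheduled Events schema, but the payload here is a simplified, hypothetical sample; treat the exact shape as an assumption to verify against the service documentation:

```python
# Scheduled Events endpoint (queried from inside the VM with a
# "Metadata: true" header):
METADATA_URL = ("http://169.254.169.254/metadata/scheduledevents"
                "?api-version=2020-07-01")

def preempt_events(events_doc):
    """Return the scheduled events that signal a Spot eviction.
    A Preempt event gives you roughly 30 seconds to react, e.g. to
    flush data or close connections."""
    return [e for e in events_doc.get("Events", [])
            if e.get("EventType") == "Preempt"]

# Simplified sample payload in the shape Scheduled Events returns:
sample = {"DocumentIncarnation": 1, "Events": [
    {"EventId": "A123", "EventType": "Preempt",
     "ResourceType": "VirtualMachine", "Resources": ["spot-vm-01"]}]}

for event in preempt_events(sample):
    print("Eviction pending for:", event["Resources"])
```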
When you create the Spot VM, you must select either the Deallocate (default) or Delete eviction policy. The Delete option will permanently delete the VM and its disks, and there will be no additional costs. A Deallocate policy means that the VM is evicted and placed into a Stopped-Deallocated state, and its VHD file is saved. A deallocated VM can be redeployed later if capacity frees up, although there is no guarantee that this will ever happen. You also have to pay for the storage used by this file, so be careful if you are running a lot of Spot VMs with this setting. If you want to redeploy that VM, you will have to restart it from the Azure Portal or using Azure PowerShell or CLI. Even if the price goes down in the future, it will not automatically restart.
Virtual Machine Scale Sets (VMSS) and Azure Spot VMs
Azure Spot VMs are fully integrated and compatible with VM Scale Sets (VMSS). A VMSS lets you run a virtualized application behind a load-balancer that can dynamically scale up or down based on user load. By using Spot VMs with a VMSS, you can scale up only when the costs are below the maximum price which you have assigned. To create a Spot VM set, you can set the priority flag to Spot via the portal, PowerShell, CLI, or in the Azure Resource Manager (ARM) template. At this time, an existing scale set cannot be converted into a Spot scale set; this configuration must be enabled when the scale set is created. To ensure that the set does not grow too large, a Spot VM quota can be applied to the scale set. Automatically scaling the set using the autoscale rules is fully supported; however, any deallocated Spot VMs will count against the quota, even if they are not running. For this reason, it is recommended to use the Delete eviction option for the Spot VMs within a VMSS.
Pricing for Azure Spot VMs
When you create a Spot VM, you will see the current price for the region, image, and VM size which you have selected. The price is always displayed in US Dollars for consistency and transparency, even if you are using a different default currency for billing. The discounted price can range significantly, up to 90% cheaper than the base price. Keep in mind that if you are flexible on the type, size, or location of the VM, that you can browse different options to see the cheapest price.
The current price can increase as more hardware is requested by other users, so you can also set a max price that you are willing to pay. So long as the current price is below the capped price, the VM will stay running. However, if the price becomes more expensive than your max price, the VM will be evicted according to the policy you have specified. If you want to change the max price, you have to first deallocate the VM, update the price, then restart the VM. If you do not care what the max price is, so long as it is cheaper than the standard price, then you can set the max price to -1. In this scenario, you will pay either the discounted price or the standard price, whichever is cheaper.
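The pricing rules above boil down to a simple decision, sketched here in Python (illustrative only; Azure applies this logic on the platform side, and the prices shown are made-up numbers):

```python
def should_evict(current_price, max_price):
    """Price-based eviction check for a Spot VM.

    A max_price of -1 means 'never evict on price': you simply pay
    the current Spot price, capped at the standard pay-as-you-go
    price, and the VM keeps running."""
    if max_price == -1:
        return False
    return current_price > max_price

print(should_evict(0.012, 0.05))  # False: still under the cap
print(should_evict(0.070, 0.05))  # True: cap exceeded, VM is evicted
print(should_evict(0.500, -1))    # False: -1 never evicts on price
```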
Quotas for Azure Spot VM
Microsoft Azure has a concept called quotas which allows you to assign a maximum number of VM resources (vCPUs) to a particular user or group. In order to simplify management, Spot VMs have their own quota category which is separate from regular VMs. This allows admins or managed service providers (MSPs) to ensure that only a maximum number of Spot VMs are running, so that they do not accidentally over-deploy VMs which exceed their need or budget. This single quota is used for Spot VMs and Spot VMSS instances for all VMs in an Azure region. The following screenshot shows the Quota Details page for Spot VMs.
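The quota check amounts to simple vCPU accounting, sketched below (the quota and usage numbers are hypothetical; real values come from your subscription's regional Spot quota):

```python
def fits_quota(spot_vcpus_in_use, spot_quota, new_vm_vcpus):
    """Spot VMs draw from their own regional vCPU quota, separate from
    regular VMs, so a deployment is rejected once the Spot quota is
    exhausted."""
    return spot_vcpus_in_use + new_vm_vcpus <= spot_quota

print(fits_quota(28, 32, 4))  # True: exactly fills a 32-vCPU Spot quota
print(fits_quota(30, 32, 4))  # False: would exceed the quota
```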
Figure 2 – Assigning a Quota to Spot VMs
Licensing for Azure Spot VMs
For the most part, Azure Spot VMs are the same as standard VMs. All VM sizes are supported as Spot VMs, excluding the B-Series and promo versions (Dv2, NV, NC, H). Spot VMs are also available in all regions, except China. Spot VMs are not available with free trials nor as a subscription benefit. Cloud Service Providers (CSPs) and managed service providers (MSPs) may offer Spot VMs as a service to their tenants.
Spot VMs will become a popular feature in Azure for the recommended workload types. This is another great example of how Microsoft is providing its customers with more deployment options and reducing costs while maximizing hardware utilization. Are there any other types of workloads you think would work well with Spot VMs? Let us know by posting in the comments section below.
As part of your organization’s journey to the cloud / digital transformation, document storage is key. OneDrive for Business (OD4B) replaces the traditional local “Documents” folder and opens up access to work documents from anywhere, on any device, along with many other capabilities.
This article will look at what OneDrive for Business is, how it compares with personal OneDrive, how to use OD4B, protecting your files and sharing them with others securely and some tips for Microsoft 365 administrators managing OD4B for a business. If you’d like an overview on how to use OneDrive for Business I’ve made the video below which accompanies this article:
The Basics of OneDrive for Business
OD4B is SharePoint-based cloud storage, licensed as part of Office / Microsoft 365, that gives each user 1TB of storage for their documents. You can access these documents from any Windows computer (the client is built into Windows 10 version 1709 and later, and is available to download for earlier versions) or Mac, as well as through apps for Android and iOS. You can also access OD4B in any web browser; one easy way to get there is to log in at www.office.com and click on the OneDrive icon.
OD4B in Office.com
Alternatively, you can right-click on the folder in Windows Explorer on your desktop and select View online.
Right-click on OD4B in Windows Explorer
Either way, you end up in the web interface where you can create new Office documents, upload files or folders, sync the content between your machine and the cloud storage (see below) as well as create automation flows through Power Automate.
OD4B web interface
Note that if you click on an Office file in the web interface, it’ll open in the web-based version of Word, giving you the option of working on any device where you have access to a browser.
For most people, 1 TB of storage is sufficient, but many modern devices don’t come with that amount of internal storage, so you may need to choose what to sync to the local device. There are two approaches: you can right-click on a folder or file and select Always keep on this device, which will do exactly that (and take up space on your local PC), or Free up space, which will delete the local copy but keep the files in the cloud. You can tell the different states by the filled green tick icon (always on this device) or the white cloud (space freed up). The automatic way is to simply double-click on a file that you need to work on, and the file will be downloaded (green tick on white background), called Available locally; this feature is called Files On-Demand.
In Windows, there’s also a handy “pop up” menu to see the status of OD4B, see which files have been recently synced, and also lets you pause syncing temporarily.
Pop up menu from OD4B client
If you’re working in Word, Excel, or PowerPoint on Windows or Mac on a file stored in OD4B (or OneDrive personal / SharePoint Online), it’ll AutoSave your changes without you having to save manually. OD4B will also become the default save location in Word, Excel, etc.
And the “secret” is that OD4B is just a personal document library in SharePoint Online, managed by the OD4B service.
Choosing syncing options for folders
OneDrive versus OneDrive for Business
If you sign up for a free Microsoft account, you get the personal flavor of OneDrive which provides 5GB of storage. You can augment this with a Microsoft 365 personal (1 person) or Home (up to 6 users) subscription providing up to 1TB of storage per user, as well as Office for your PC or Mac.
From an end-user point of view the services are very similar but the business version adds identity federation, administrative control, Data Loss Prevention (DLP), and eDiscovery.
OD4B provides quite a few advanced features that the casual user might not know about. For instance, when you’re attaching a document to an email, you’ll have the option to attach a link to the document in your OD4B instead of a copy of it. If you’re emailing the document to someone internally in your business or someone externally that you collaborate with, this is a better option as you’ll both still be working on the one file (potentially at the same time, see below) rather than having multiple copies attached to different emails and ending up having to manually reconcile the edits at the end.
Known Folder Move is another feature that you can enable as an administrator. This will redirect the Desktop, Documents, Pictures, Screenshots and Camera Roll folders from a user’s local device to OD4B. This has two benefits: firstly, if a user loses their device or it’s broken, their files will still be there when they log in on a new device; secondly, they can use their local Documents, Pictures, etc. folders as they always have.
There’s also versioning built into OD4B which keeps track of each version as it’s saved; you can access this either in the web interface or by right-clicking on a file in Windows Explorer.
OD4B document versions
The Recycle bin in the web UI for OD4B has saved many an IT Pro’s career when the CEO has deleted (“by mistake” – but they swear they never hit delete) an important file. Simply click on the Recycle bin and restore files that were deleted up to 93 days ago (up to 30 days for OneDrive personal). A related feature is OneDrive Restore that lets you recover an entire (or parts of) OD4B, perhaps after all the files have been encrypted by a ransomware attack. It also shows a histogram of versions for each file, making it easy to spot the version you want to restore.
Using AI, OD4B (and SharePoint) will automatically extract text from photos that you store so that you can use it when searching for files; it’ll also automatically provide a transcript for any audio or video file you store. File insights let you see who has viewed and edited a shared file (see below) and get statistics.
If you’re using the app on your smartphone you can scan the physical world (a whiteboard, a document, business card, or photo) with the camera and it’ll use AI to transcribe the capture.
Scanning in the Android app
Recently, Microsoft added a new feature called Add to OneDrive that lets you add a shortcut in OD4B to folders that others have shared with you or that are shared with you in Teams or SharePoint. Speaking of Teams, sharing files in there will now use the same sharing links functionality that OD4B uses (see below). Even more useful will be the forthcoming ability to move a folder and keep the sharing permissions you have configured for it, and for some files (CAD drawings, anyone?) the increase of the maximum file size from 15GB to 100GB is welcome. And, like all the other cool kids, OD4B (and OneDrive personal) on the web will add a dark theme option.
Collaboration and OneDrive for Business
One of the powerful features of OD4B is the ability to share documents (and folders) with internal and external users. As you might expect, administrators have full control over sharing options (see below) but assuming it’s not turned off or restricted, you can right-click on a file or folder and click the blue cloud Share icon, or click the Share option in the web interface. This lets you share a link to the file or folder with internal and external users, grant access to specific people, make it read-only or allow editing, and block the ability to download the document (so they have to edit the online, shared copy).
Once a document is shared you can also use Co-authoring to work on the document simultaneously, both in the web-based versions of Word and Excel as well as the desktop versions of the Office apps. You can see which parts of a document another user is working on.
If you’re the administrator for your Office 365 deployment you can access the SharePoint admin center (from the main Microsoft 365 Admin center) and control sharing for both OneDrive and SharePoint. There is also a link to the OneDrive admin center where you have control over other aspects of OD4B as well as the same sharing settings.
Sharing Settings in OD4B Admin Center
The main setting for you to consider here is who your users can share content with. The most permissive setting allows them to share links to documents with anyone, no authentication required (not recommended). The next level of restriction allows your users to invite external users to the organization, but those users have to sign in (using the same email address that the sharing link was sent to), creating an external user in your Azure Active Directory and thus giving you some control, including the ability to apply Conditional Access to their access. If you only allow sharing with existing external users, you must have another process in place for inviting external users. And the most restrictive setting is to only allow sharing with internal users, blocking external sharing entirely. Don’t be fooled by these sliders, however: if you set this too restrictively and users need to share documents externally, they will do so using personal email, other cloud storage solutions, etc. They just won’t be using OD4B sharing links, which at least give you visibility in audit logs and reports, along with some control.
Under the advanced settings for the links you can configure link expiry in days, prohibiting links that last “forever”. You can also limit links to be view-only. The advanced settings for sharing let you allowlist or blocklist particular domains for sharing, prevent further sharing (an external user sharing with another external user), and let owners see who is viewing their files.
Under Sync you can limit syncing to domain-joined computers and block specific file types. Storage lets you limit the storage quota and set the number of days that OD4B content is kept after a user account is deleted. Device access lets you limit access based on IP address as well as set some restrictions for the mobile apps, whereas the Compliance blade has links to DLP, Retention, eDiscovery, Alerts, and Auditing, all of which are generic Office 365 features. The next blade, Notifications, controls email notifications for sharing, and the last blade, Data migration, links to an article with tools for migrating to OD4B from on-premises storage.
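Many of these tenant-wide sharing settings can also be applied from the SharePoint Online Management Shell rather than clicking through the admin center. A minimal sketch, assuming the Microsoft.Online.SharePoint.PowerShell module is installed and using a hypothetical tenant name ("contoso"):

```powershell
# Connect to the tenant admin endpoint (hypothetical tenant name)
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"

# Only allow sharing with external users who sign in
Set-SPOTenant -SharingCapability ExternalUserSharingOnly

# Anonymous links (where enabled) expire after 30 days
Set-SPOTenant -RequireAnonymousLinksExpireInDays 30

# Restrict external sharing to a specific partner domain (hypothetical)
Set-SPOTenant -SharingDomainRestrictionMode AllowList `
              -SharingAllowedDomainList "partner.com"

# Keep a deleted user's OneDrive content for 365 days
Set-SPOTenant -OrphanedPersonalSitesRetentionPeriod 365
```

These cmdlets set the same values as the sliders in the admin center, just tenant-wide and scriptable, which is handy if you manage multiple tenants.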
Note that a recent announcement means that the OD4B admin center functionality will move into the SharePoint Online admin center, but the above functionality will stay intact, just not in a separate portal.
There’s no doubt that cloud storage is a cornerstone of successful digital transformation and if you’re already using Office 365, OneDrive for Business is definitely the best option.
Is Your Office 365 Data Secure?
Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs.
The Wide Area Network market is changing right now. When talking to my customers, I see more and more of them moving away from classic MPLS and dark fibre networks, and with that, many of them are thinking about the value of services like Microsoft Azure ExpressRoute.
With this blog post, I would like to give you some insights into how ExpressRoute is evolving and becoming more valuable beyond a simple MPLS or datacenter interconnect to Azure.
First, let’s get some frequently asked questions out of the way.
What is Azure ExpressRoute?
ExpressRoute is an Azure service that lets you connect your on-premises networks and network colocations to the Microsoft Edge Network via a private connection between you, your connectivity provider, and Microsoft. These connection locations are called Microsoft Enterprise Edge (MSEE) and they are distributed around the globe across more than 65 network colocations in different datacenters.
With ExpressRoute you can access either Microsoft Azure services or Microsoft 365 services. Microsoft still does not recommend accessing Microsoft 365 through ExpressRoute, but there is another interesting option for using ExpressRoute beyond regular Azure service access.
Later in the article, when we dig deeper into our main topic, you will learn how to leverage ExpressRoute to connect your datacenters and colocations independently of provider availability in each location. There is a SKU for ExpressRoute called Global Reach, which enables customers to build and extend their backbone through the Microsoft Global Network.
How Does Azure ExpressRoute Work?
As already explained, ExpressRoute is a private connection between the customer and the Microsoft Edge Network. ExpressRoute comes in different flavors.
ExpressRoute or ExpressRoute Direct?
There are two options on how to do the physical interconnect. The first and regular option is to use ExpressRoute through a Network Provider as shown in the example below.
The other option is to use ExpressRoute Direct. Here you eliminate the need for a network provider and establish a connection with Microsoft yourself.
With a regular ExpressRoute you are limited to what your provider can offer in terms of locations, and to a maximum bandwidth of 10 GbE. You also pay additional costs for the provider interconnect between you and Microsoft, but it is much easier to implement and you do not need high-end routing equipment, colocation space in the MSEE Edge location, or deep knowledge of provider peering.
With ExpressRoute Direct you can achieve much higher bandwidth, currently up to 100 GbE. For that, you need to follow some peering policies and technical requirements shown below.
Microsoft Enterprise Edge Router (MSEE) Interfaces:
Dual 10 or 100 Gigabit Ethernet ports only across router pair
Single Mode LR Fiber connectivity
IPv4 and IPv6
IP MTU 1500 bytes
Switch/Router Layer 2/Layer 3 Connectivity:
Must support one-tag 802.1Q (Dot1Q) or two-tag 802.1Q (QinQ) encapsulation
Ethertype = 0x8100
Must add the outer VLAN tag (STAG) based on the VLAN ID specified by Microsoft – applicable only on QinQ
Must support multiple BGP sessions (VLANs) per port and device
IPv4 and IPv6 connectivity. For IPv6, no additional sub-interface will be created; the IPv6 address will be added to the existing sub-interface.
To explain an ExpressRoute Circuit you need to know about the other components of an ExpressRoute in Azure first.
ExpressRoute Gateway: when you want to connect an ExpressRoute Circuit to a virtual network, you need some kind of gateway. That gateway is called an ExpressRoute Gateway and is another “mode” of the Azure Virtual Network Gateway.
Peerings: To establish the routing through an ExpressRoute Circuit you need to configure peerings on the circuit to establish the BGP connection. There are two peering types.
Private Peering: Azure virtual machines, IaaS resources, and Azure PaaS resources that support private endpoints or VNet integration can be reached through ExpressRoute Private Peering when deployed within a virtual network. Azure Private Peering can be considered a trusted extension of a customer’s core network into Microsoft Azure datacenters. With private peering, you set up a bidirectional interconnect between your network and your virtual network in Azure. The peering IPs used on the private peering are private IPv4 addresses (with IPv6 coming later this year).
Microsoft Peering: Connectivity to Microsoft online services such as Office 365 and Azure PaaS services is made available via Microsoft Peering. Microsoft enables bidirectional connectivity between a customer WAN and Microsoft cloud services through the Microsoft global backbone and the Microsoft routing domain with AS number 12076. Microsoft Peering can only use public IP addresses owned by the customer or the customer’s connectivity provider. To enable Microsoft Peering, you need to agree to all the rules connected with that peering.
Now let’s talk about the circuit itself. The circuit is a so-called NNI, a Network to Network Interconnect.
A network-to-network interface (NNI) is a physical interface that connects two or more networks and defines the signaling and management processes between them. It enables the linking of networks using signaling, Internet Protocol (IP) or Asynchronous Transfer Mode (ATM).
How do I set up an ExpressRoute in Azure?
There are three different parts to set up an ExpressRoute.
The first one is the Setup within the Azure Portal. Microsoft published a pretty good guide on how to do it.
The next part is to set up the provider side of your ExpressRoute, which is highly dependent on your provider. I have linked examples from two providers.
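For reference, the circuit-creation step in the Azure Portal can also be scripted with the Az PowerShell module. A sketch under assumed values (the circuit name, resource group, provider, peering location, and bandwidth are all hypothetical):

```powershell
# Create the circuit object on the Microsoft side (hypothetical names/values)
$circuit = New-AzExpressRouteCircuit -Name "ER-Europe" `
    -ResourceGroupName "rg-wan" -Location "westeurope" `
    -SkuTier Standard -SkuFamily MeteredData `
    -ServiceProviderName "Equinix" -PeeringLocation "Amsterdam" `
    -BandwidthInMbps 1000

# Hand this service key to your connectivity provider so they can
# provision their side of the circuit
$circuit.ServiceKey
```

The circuit stays in a "NotProvisioned" provider state, and billing starts, until the provider completes their side using that service key.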
Using ExpressRoute with Global Reach to Interconnect Datacenters
Now, let’s move to the main topic: how can I use ExpressRoute to interconnect my colocations and datacenters?
Why Would you Use ExpressRoute Instead of a Global Network Provider?
There are three main reasons why you might decide on a combination of ExpressRoute and a local interconnect provider.
Provider availability: When you look into provider availability, you will sooner or later notice that not every provider is available in every region or in every datacenter. When you are in a local datacenter or a region with a limited number of network providers, you normally face high costs to make your network provider available in that datacenter or on-premises location. Let me show you an example from PeeringDB. With ExpressRoute you can select any provider who can make ExpressRoute connections available in that region or datacenter; you do not need the same provider in every location.
Networks in Equinix Dusseldorf
Networks in ITENOS Berlin
Long-term contracts: When you want to interconnect datacenters you usually need to agree to some kind of long-term contract with a term of 12 months or more. With providers like Megaport, Equinix, Interxion, and others, you mostly have a pay-as-you-go agreement which can be canceled every month. It is the same with Microsoft ExpressRoute: you can use that interconnect on a pay-as-you-go basis.
Provider lock-in: When working with network providers you normally commit to one provider, and changing afterward is a huge migration with a high investment of time and money. Many customers don’t have that flexibility and overpay on networking.
What do I Need to Enable ExpressRoute for Global Interconnect?
ExpressRoute by default is already a global service, but with the main SKUs it can only connect you to Azure production regions/datacenters, not interconnect networks from different providers. To interconnect network providers, you need the ExpressRoute Global Reach add-on.
Looking at an ExpressRoute without Global Reach, the interconnect would look like the following.
When enabling Global Reach, the routing behavior changes as shown below.
While an Azure production region is normally within three milliseconds of an ExpressRoute edge, traffic with Global Reach stays entirely within the Microsoft Edge Network.
Now, let’s think about how that could work with an interconnect strategy. In our scenario, we have the following locations and interconnects.
Europe provided by British Telecom
Hong Kong provided by Equinix
Japan provided by NTT
Normally you would need to ask all of those providers to build an interconnect, or get another colocation or office location to interconnect all these networks yourself. As you can see in the picture, with ExpressRoute Global Reach you can do that through the Microsoft backbone and consume it as a service from Microsoft Azure. Notably, you do not need to use any additional cloud services from Microsoft; you can use Microsoft purely as a backbone provider. The figure below shows our scenario simplified.
After you have configured the ExpressRoutes with the local providers of your choice, you need to set up ExpressRoute Global Reach. There is one downside: ExpressRoute Global Reach is not available in every country where you have Azure regions, mostly because legal or tax regulations would make Microsoft, with Global Reach, a network service and last-mile network provider. In those cases, Microsoft mostly solves this over time with special government agreements.
There is a workaround for those countries where Global Reach is not available, like in South Africa, India or Brazil. I will explain the workaround later in the blog.
How to Set Up Global Reach
ExpressRoute Global Reach can only be set up through PowerShell or the Azure CLI. There are two options when you set up Global Reach; I will link both setup guides below.
Afterward, you need to verify the configuration. That must also be done via PowerShell or the Azure CLI.
If you simply run $ckt1 in PowerShell, you see CircuitConnectionStatus in the output. It tells you whether the connectivity is established, “Connected”, or “Disconnected”. For more information, you can consult this detailed guide.
To disconnect, you also run a command, which looks like the following.
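Putting the steps above together, here is a sketch of the Global Reach lifecycle with the Az PowerShell module. The circuit names, resource group, and the /29 link prefix are hypothetical; the /29 must be an unused address block:

```powershell
# Fetch the two circuits you want to bridge
$ckt1 = Get-AzExpressRouteCircuit -Name "ER-Europe"   -ResourceGroupName "rg-wan"
$ckt2 = Get-AzExpressRouteCircuit -Name "ER-HongKong" -ResourceGroupName "rg-wan"

# Connect circuit 1's private peering to circuit 2's private peering
Add-AzExpressRouteCircuitConnectionConfig -Name "Europe-to-HongKong" `
    -ExpressRouteCircuit $ckt1 `
    -PeerExpressRouteCircuitPeering $ckt2.Peerings[0].Id `
    -AddressPrefix "10.50.0.0/29"
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt1

# Verify: re-read the circuit and inspect CircuitConnectionStatus
$ckt1 = Get-AzExpressRouteCircuit -Name "ER-Europe" -ResourceGroupName "rg-wan"
$ckt1

# Disconnect again by removing the connection configuration
Remove-AzExpressRouteCircuitConnectionConfig -Name "Europe-to-HongKong" `
    -ExpressRouteCircuit $ckt1
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt1
```

Note that each configuration change only takes effect once Set-AzExpressRouteCircuit pushes the modified circuit object back to Azure.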
What is an Alternative when Global Reach is not available?
To use private peering with ExpressRoute Global Reach, it needs to be enabled in the country in question. As already explained, that’s not the case everywhere.
You can use the global transit architecture with Azure Virtual WAN, or an overlay network using a Network Virtual Appliance (NVA). In that case, you create an ExpressRoute but do not use ExpressRoute Private Peering; instead, you configure Microsoft Peering on that ExpressRoute. The public IP addresses of the Azure Virtual WAN VPN Gateway or of the Network Virtual Appliance are then reachable through that peering, which enables you to build an IPSec tunnel through the ExpressRoute to the VPN Gateway. With Virtual WAN you can then route through the Microsoft backbone to the other ExpressRoute gateways. With an NVA you can leverage User Defined Routes to establish the same transit architecture.
The schematic setup for a solution with Azure Virtual WAN could look like the following.
With Virtual WAN such a solution works out of the box as soon as ExpressRoute Global Reach becomes available. You only need to enable it, and a few minutes later Virtual WAN will switch from the IPSec VPN to the connected ExpressRoute. Afterward, you can simply decommission the VPN tunnel.
Hopefully, my post gives you a brief introduction to the possibilities you have with ExpressRoute besides a simple connection to Azure. If you would like to read more about Wide Area Networks with Microsoft Azure, please leave a comment and describe the scenarios you are looking for.
Estimating the real cost of a technology solution for a business can be challenging. There are obvious costs as well as many intangible costs that should be taken into account.
For on-premises solutions, people tend to include licensing and support maintenance contract costs, plus server hardware and virtualization licensing costs. For Software as a Service (SaaS) cloud solutions, it seems like it should be easier since there’s no hardware component, just the monthly cost per licensed user, but this simplification can be misleading.
In this article we’re going to look at the complete picture of the cost of Microsoft 365 (formerly Office 365), how choices you as an administrator make can directly influence costs, and how you can help your business maximize the investment in OneDrive, SharePoint, Exchange Online and other services.
There’s no reason to believe that this name change won’t eventually extend to the Enterprise SKUs but until it does, from a licensing cost perspective it’s important to separate the two. Office 365 E1, E3 and E5 give you the well-known “Office” applications, either web-based or on your device, along with SharePoint Online, Exchange Online and OneDrive for Business in the cloud backend.
Microsoft 365 F3, E3 and E5, on the other hand, include everything from Office 365 plus Azure Active Directory Premium features (identity security), Enterprise Mobility & Security (EMS) / Intune for Mobile Device Management (MDM) and Mobile Application Management (MAM), along with Windows 10 Enterprise.
Comparing M365 plans
So, a decision that needs to be looked at early when you’re looking to optimize your cloud spend is whether your business is under 300 users and likely to stay that way for the next few years. If that’s the case you should definitely look at the M365 Business SKUs as they may fulfill your business needs, especially as Microsoft recently added several security features from AAD Premium P1 to M365 Business.
If you’re close to 300, expecting to grow or already larger, you’re going to have to pick from the Enterprise offerings. The next question is then, what’s the business need – do you just need to replace your on-premises Exchange and SharePoint servers with the equivalent cloud-based offerings? Or is your business looking to manage corporate-issued mobile devices (smartphones and tablets) with MDM or to protect data on employee-owned devices? The latter is known as Bring Your Own Device (BYOD), sometimes called Bring Your Own Disaster. If you have those needs (and no other MDM in place today), the inclusion of Intune in M365 might be the clincher. If, on the other hand, you need to protect your on-premises Active Directory (AD) against attacks using Azure Advanced Threat Protection (AATP), or to inspect, understand and manage your users’ cloud usage through Microsoft Cloud App Security (MCAS), you’ll need M365 E5, rather than just O365.
Cloud app security dashboard
The difference is substantial: outfitting 1000 users with O365 E3 will cost you $240,000 per year, whereas moving up to M365 E3 will cost you $384,000. And springing for the whole enchilada with every security feature available in M365 E5 will cost you $684,000, nearly 3x the cost of O365 E3. Thus, you need to know what your business needs and tailor the subscriptions accordingly (see below for picking individual services to match business requirements).
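As a quick sanity check, those annual totals are just the per-user monthly list prices multiplied out. A sketch (the per-user prices are the ones implied by the annual figures, and will vary with your agreement):

```powershell
$users = 1000
# Per-user monthly list prices implied by the annual totals above
$o365E3 = 20; $m365E3 = 32; $m365E5 = 57

"O365 E3: `${0:N0}/year" -f ($users * $o365E3 * 12)   # 240,000
"M365 E3: `${0:N0}/year" -f ($users * $m365E3 * 12)   # 384,000
"M365 E5: `${0:N0}/year" -f ($users * $m365E5 * 12)   # 684,000
```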
Note that if you’re in the education sector you have different options (O365 A1, A3, and A5 along with M365 A1, A3, and A5) that are roughly equivalent to the corresponding Enterprise offerings but less costly. And charities/not-for-profits have options as well for both O365 and M365. M365 Business Premium is free for up to 10 users for charities and $5 per month for additional users.
A la carte instead of bundles
There are two ways to optimize your subscription spend in O365 / M365. Firstly, you can mix licenses to suit the different roles of workers in your business. For instance, the sales staff in your retail chain stores are assigned O365 E1 licenses ($8 / month) because they only need web access to email and documents, the administrative staff in head office use O365 E3 ($20 / month) and the executive suite and other high-value personnel use the full security features in E5 ($35 / month). Substitute M365 F3, E3, and E5 in that example if you need the additional features in M365.
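To see what mixing licenses can save, here is the retail-chain example worked through with hypothetical headcounts (700 store staff, 250 admin staff, 50 executives) and the per-user monthly prices quoted above:

```powershell
# Hypothetical headcounts for a 1000-user business
$sales = 700; $admin = 250; $execs = 50

# Per-user monthly list prices: E1 $8, E3 $20, E5 $35
$mixed = $sales * 8 + $admin * 20 + $execs * 35   # 12,350 per month
$allE3 = ($sales + $admin + $execs) * 20          # 20,000 per month

"Mixed licensing: `$$mixed/month vs all E3: `$$allE3/month"
```

In this sketch, matching licenses to roles saves over a third of the monthly spend compared with putting everyone on E3.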
Secondly, you don’t have to use the bundles that are encapsulated in the E3, E5, etc. SKUs; you can instead pick exactly the standalone services you need to meet your business needs. Maybe some users only need Exchange Online whereas other users only need Project Online. The breakdown of exactly what features are available across all the different plans and standalone services is beyond the scope of this article, but the O365 and M365 service descriptions are the best place to start investigating.
Excerpt from the O365 Service Description
And if you’re a larger business (500+ users) you’re not going to pay list prices; instead these licenses will probably be part of a larger, multi-year enterprise agreement with substantial discounts.
If you hate change
If you want to stay on-premises, Exchange Server 2019 is available (it only runs on Windows Server 2019), as is SharePoint Server 2019, and you can even buy the “boxed” version of Office 2019 with Word, Excel, etc. with no links to the cloud whatsoever. This is an option that moves away from the monthly subscription cost of M365 (there’s no way to “buy” M365 outright) and back to the traditional way of buying software packages every 2-5 years. Be aware that these on-premises products do NOT offer the same rich features that O365 / M365 provides, whether it’s the super-tight integration between Exchange Online and SharePoint Online, cloud-only services like Microsoft Teams that build on top of the overall O365 fabric, or AI-powered design suggestions in the O365 versions of Word and PowerPoint. There’s no doubt that Microsoft’s focus is on the cloud services; these are updated with new features on a daily basis, instead of every few years. If your business is looking to digitally transform, towards tech intensity (two recent buzzwords in IT with a kernel of truth in them), using on-premises servers and boxed software licensing is NOT going to get you there. But if you want to keep going like you always have, it’s an option.
And if you’re looking at this from a personal point of view, a free Microsoft account through Outlook.com does give you access to Office Online: Word, Excel, and PowerPoint in a browser. There’s even a free version of Microsoft Teams available.
Transforming your business
There’s a joke going around at the moment that the Covid-19 pandemic has brought digital transformation to many businesses in weeks that would otherwise have taken years. There’s no doubt that adopting cloud services can truly change how you run your business for the better. A good example is moving internal communication from email to Teams, including voice and video calls, and perhaps even replacing a phone system with cloud-based phone plans.
But these business improvements depend on the actual adoption of these new tools, and that requires a mindset shift for everyone. Start with your IT department: if they still see M365 as just cloud-hosted versions of their old on-premises servers, they’re missing the much bigger picture of the integrated platform that O365 has become. Examples include services such as Data Loss Prevention (DLP), unified labeling and automatic encryption/protection of documents and data, and unified audit logging that spans ALL the workloads. So, make sure you get them on board with seeing O365 as a technology tool to transform the business, not just a place to store emails and documents in OneDrive. And adding M365 unlocks massive security benefits, enabling zero trust (incredibly important as everyone is working from home), identity-based perimeters, and cloud usage controls. But if your IT or security folks aren’t on board with truly adopting these tools, they’re not going to make you any more secure. Here’s a free IT administrator training for them.
Finally, you’re going to have to bring all the end-users on board with a good Adoption and Change Management (ACM) program, helping everyone understand these new services and what they can do to make their working lives better. This includes training but make sure you look to short, interactive, video-based modules that can be applied just when the user needs coaching on a particular tool, not long classroom-based sessions.
And all of that, for all the different departments, isn’t a one-off when you migrate to O365; it’s an ongoing process, because the other superpower of the cloud is that it changes and improves ALL the time. This means you’ll need to assign someone to track the changes that are coming or in preview and ensure that the ones that really matter to your business are understood and adopted. The first place to look is the Microsoft 365 Message Center in the portal, where you can also sign up for regular emails with summaries of what’s coming. Another good source is the Office 365 Weekly Blog.
M365 portal Message Center
A great course to help your IT staff is the free Microsoft Service Adoption Specialist (if you want the certificate at the end, it’s going to cost you $99). To help you track your usage and adoption of the different services in O365 there is a usage analytics integration with PowerBI. Use this information to firstly see where adoption can be improved and take steps to help users with those services and secondly to identify services and tools that your business isn’t using and perhaps don’t need, giving you options for changing license levels to optimize your subscription spend.
PowerBI O365 Usage Analytics (courtesy of Microsoft)
There’s another factor to consider as you’re moving from on-premises servers to Microsoft 365 and that’s the changing tasks of your IT staff. Instead of swapping broken hard drives in servers these people now need to be able to manage cloud services and automation with PowerShell and most importantly, see how these cloud services can be adopted to improve business outcomes.
A further potential cost to take into account is backup. Microsoft keeps four copies of your data, in at least two datacentres so they’re not going to lose it but if you need the ability to “go back in time” and see what a mailbox or SharePoint library looked like nine months ago, for instance, you’ll need a third-party backup service, further adding to your monthly cost.
And that’s part of the overall cost of using O365 or M365: training staff, adopting new features, new tasks for administrators, and managing change all require people and resources; in other words, money. That has to be factored into the overall cost of using Microsoft 365; it’s not just the monthly license cost.
The final question is of course – is it worth it? Speaking as an IT consultant with clients (including a K-12 school with 100 students) who recently moved EVERYONE to work and study from home, supported by O365, Teams, and other cloud services, the answer is a resounding yes! There’s no way we could have managed that transition with only on-premises infrastructure to fall back on.
In this article, I will write about a familiar-sounding tool I regularly use to prepare custom images for Azure, amongst other tasks. Windows 10 comes with the client version of Hyper-V built in, so there is no need to download anything extra! It is the same Hyper-V you use on Windows Server but without the cluster features. Here’s how to configure Hyper-V for Windows 10.
Operating System Prerequisites
First, let us check the prerequisites.
Windows 10 Licensing
Not every edition of Windows 10 includes Hyper-V. Only the following editions are eligible for Windows 10 Hyper-V.
Windows 10 Professional
Windows 10 Enterprise
Windows 10 Education
You can find your installed Windows edition and version using PowerShell and the following command.
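One common way to check, as a sketch (Get-ComputerInfo ships with Windows 10 PowerShell; exact property names can vary slightly between builds):

```powershell
# Show the installed edition, version, and build of Windows
Get-ComputerInfo | Select-Object WindowsProductName, WindowsVersion, OsBuildNumber
```

If WindowsProductName reports Professional, Enterprise, or Education, your edition supports Hyper-V.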