
Why You Should Use OneDrive for Business

As part of your organization’s journey to the cloud / digital transformation, document storage is key. OneDrive for Business (OD4B) replaces the traditional local “Documents” folder and opens up access to work documents from anywhere, on any device, along with many other capabilities.

This article will look at what OneDrive for Business is, how it compares with personal OneDrive, how to use OD4B, how to protect your files and share them securely with others, and some tips for Microsoft 365 administrators managing OD4B for a business. If you’d like an overview of how to use OneDrive for Business, I’ve made the video below, which accompanies this article:

[embedded content]

The Basics of OneDrive for Business

OD4B is SharePoint-based cloud storage that you license as part of Office / Microsoft 365 and that gives each user 1 TB of storage for their documents. You can access these documents from any Windows computer (the client is built into Windows 10 version 1709 or later, but is also available for earlier versions) or Mac, as well as through apps for Android and iOS. You can also access OD4B in any web browser; one easy way to get there is to log in at www.office.com and click on the OneDrive icon.

OD4B in Office.com

Alternatively, you can right-click on the folder in Windows Explorer on your desktop and select View online.

Right-click on OD4B in Windows Explorer

Either way, you end up in the web interface where you can create new Office documents, upload files or folders, sync the content between your machine and the cloud storage (see below) as well as create automation flows through Power Automate.

OD4B web interface

Note that if you click on an Office file in the web interface, it’ll open in the web-based version of Word, giving you the option of working on any device where you have access to a browser.

For most people, 1 TB of storage is sufficient, but many modern devices don’t come with that amount of internal storage, so you may need to choose what to sync to the local device. There are two approaches: you can right-click on a folder or file and select Always keep on this device, which does exactly that (and takes up space on your local PC), or Free up space, which deletes the local copy but keeps the files in the cloud. You can tell the different states apart by the filled green tick icon (always on this device) or the white cloud icon (space freed up). The automatic way is to simply double-click on a file that you need to work on; the file will be downloaded (green tick on a white background) and marked as Available locally. This feature is called Files On-Demand.
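
If you ever need to script these states instead of right-clicking, the attrib.exe tool in recent Windows 10 builds exposes pin/unpin switches that map to the same Files On-Demand states. A minimal sketch from PowerShell, where the folder and file names are just examples for illustration:

# Mark a single file as "always keep on this device" (+P = pinned); the path is an example
attrib +P "$env:USERPROFILE\OneDrive - Contoso\Reports\Budget.xlsx"

# Make the same file online-only again (-P removes the pin, +U marks it unpinned)
attrib -P +U "$env:USERPROFILE\OneDrive - Contoso\Reports\Budget.xlsx"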

In Windows, there’s also a handy “pop up” menu to see the status of OD4B, see which files have been recently synced, and also lets you pause syncing temporarily.

Pop up menu from OD4B client

If you’re working in Word, Excel or PowerPoint, on either Windows or Mac, on a file stored in OD4B (or OneDrive personal / SharePoint Online), it’ll AutoSave your changes without you having to save manually. OD4B will also become the default save location in Word, Excel, etc.

And the “secret” is that OD4B is just a personal document library in SharePoint Online, managed by the OD4B service.

Choosing syncing options for folders

OneDrive versus OneDrive for Business

If you sign up for a free Microsoft account, you get the personal flavor of OneDrive, which provides 5 GB of storage. You can augment this with a Microsoft 365 Personal (one person) or Home (up to six users) subscription, providing up to 1 TB of storage per user, as well as Office for your PC or Mac.

From an end-user point of view the services are very similar but the business version adds identity federation, administrative control, Data Loss Prevention (DLP), and eDiscovery.

Advanced Features

OD4B provides quite a few advanced features that the casual user might not know about. For instance, when you’re attaching a document to an email, you’ll have the option to attach a link to the document in your OD4B instead of a copy of it. If you’re emailing the document to someone internally in your business or someone externally that you collaborate with, this is a better option as you’ll both still be working on the one file (potentially at the same time, see below) rather than having multiple copies attached to different emails and ending up having to manually reconcile the edits at the end.

Known Folder Move is another feature that you can enable as an administrator. This redirects the Desktop, Documents, Pictures, Screenshots and Camera Roll folders from a user’s local device to OD4B. This has two benefits: first, if a user loses their device or it breaks, their files will still be there when they log in on a new device; second, they can keep using their local Documents, Pictures and other folders as they always have.
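
Behind the scenes, Known Folder Move is driven by the OneDrive policy settings. As a rough sketch of the silent opt-in approach, assuming the standard OneDrive policy registry location and using an all-zeros placeholder that you would replace with your own Azure AD tenant ID:

# Silently opt devices into Known Folder Move via the OneDrive policy key
# The tenant ID below is a placeholder; substitute your own
$OneDrivePolicy = 'HKLM:\SOFTWARE\Policies\Microsoft\OneDrive'
New-Item -Path $OneDrivePolicy -Force | Out-Null
Set-ItemProperty -Path $OneDrivePolicy -Name 'KFMSilentOptIn' -Value '00000000-0000-0000-0000-000000000000'

In production you would normally deploy the equivalent setting through the OneDrive Group Policy templates rather than writing the registry directly.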

There’s also versioning built into OD4B, which keeps track of each version as it’s saved; you can access previous versions either in the web interface or by right-clicking on a file in Windows Explorer.

OD4B document versions

The Recycle bin in the web UI for OD4B has saved many an IT Pro’s career when the CEO has deleted (“by mistake” – but they swear they never hit delete) an important file. Simply click on the Recycle bin and restore files that were deleted up to 93 days ago (up to 30 days for OneDrive personal). A related feature is OneDrive Restore that lets you recover an entire (or parts of) OD4B, perhaps after all the files have been encrypted by a ransomware attack. It also shows a histogram of versions for each file, making it easy to spot the version you want to restore.

Using AI, OD4B (and SharePoint) will automatically extract text from photos that you store so that you can use it when searching for files; it’ll also automatically provide a transcript for any audio or video file you store. File insights let you see who has viewed and edited a shared file (see below) and get statistics.

If you’re using the app on your smartphone you can scan the physical world (a whiteboard, a document, business card, or photo) with the camera and it’ll use AI to transcribe the capture.

Scanning in the Android app

Recently, Microsoft added a new feature called Add to OneDrive that lets you add a shortcut in OD4B to folders that others have shared with you or that are shared with you in Teams or SharePoint. Speaking of Teams – sharing files in there will now use the same sharing links functionality that OD4B uses (see below). Even more useful will be the forthcoming ability to move a folder and keep the sharing permissions you have configured for it, and for some files (CAD drawings, anyone?) the increase of the maximum file size from 15 GB to 100 GB is welcome. And, like all the other cool kids, OD4B (and OneDrive personal) on the web will add a dark theme option.

Collaboration and OneDrive for Business

One of the powerful features of OD4B is the ability to share documents (and folders) with internal and external users. As you might expect, administrators have full control over sharing options (see below), but assuming sharing isn’t turned off or restricted, you can right-click on a file or folder and select the Share option (the blue cloud icon), or click the Share option in the web interface. This lets you share a link to the file or folder with internal and external users, grant access to specific people, make it read-only or allow editing, and block the ability to download the document (recipients have to edit the online, shared copy).

Sharing a file

It’s a good idea to turn on external sharing notifications via email.

Once a document is shared you can also use Co-authoring to work on the document simultaneously, both in the web-based versions of Word and Excel as well as the desktop versions of the Office apps. You can see which parts of a document another user is working on.

Administration

If you’re the administrator for your Office 365 deployment you can access the SharePoint admin center (from the main Microsoft 365 Admin center) and control sharing for both OneDrive and SharePoint. There is also a link to the OneDrive admin center where you have control over other aspects of OD4B as well as the same sharing settings.

Sharing Settings in OD4B Admin Center

The main settings to consider here are who your users can share content with. The most permissive setting allows them to share links to documents with anyone, no authentication required (not recommended). The next level up allows your users to invite external users to the organization, but those users have to sign in (using the same email address that the sharing link was sent to), which creates an external user in your Azure Active Directory and thus gives you some control, including the ability to apply Conditional Access to their access. If you only allow sharing with existing external users, you must have another process in place for inviting external users. The most restrictive option is to only allow sharing with internal users, blocking external sharing altogether. Don’t be fooled by these sliders, however: if you set this too restrictively and users still need to share documents externally, they will do so using personal email, other cloud storage solutions and so on. They just won’t be using OD4B sharing links, which at least give you visibility in audit logs and reports, along with some control.
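
If you prefer to script these tenant-wide sharing settings rather than move the sliders, the SharePoint Online Management Shell exposes them through Set-SPOTenant. A rough sketch, assuming the module is installed and using an example admin URL for a “contoso” tenant:

# Connect to the SharePoint Online admin endpoint (the URL is an example)
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# Require external recipients to sign in instead of allowing anonymous "Anyone" links
# Other values include Disabled, ExistingExternalUserSharingOnly and ExternalUserAndGuestSharing
Set-SPOTenant -SharingCapability ExternalUserSharingOnly

# Confirm the result
Get-SPOTenant | Select-Object SharingCapability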

Under the advanced settings for links you can configure link expiry in days, prohibiting links that last “forever”, and you can limit links to be view only. The advanced settings for sharing let you allow or block particular domains for sharing, prevent further sharing (an external user sharing with another external user) and let owners see who is viewing their files.

Under Sync you can limit syncing to domain-joined computers and block specific file types. Storage lets you limit the storage quota and set the number of days that OD4B content is kept after a user account is deleted. Device access lets you limit access based on IP address as well as set some restrictions for the mobile apps, whereas the Compliance blade has links to DLP, Retention, eDiscovery, Alerts and Auditing, all of which are generic Office 365 features. The next blade, Notifications, controls email notifications for sharing, while the last blade, Data migration, is a link to an article with tools for migrating to OD4B from on-premises storage.

If you’re considering OD4B, there are handy deployment and administration guides for administrators, both for Enterprises and Small businesses. If, on the other hand, your business is definite about keeping “stuff” on-premises, you can use OneDrive with SharePoint Server, including SharePoint Server 2019.

Note that a recent announcement means that the OD4B admin center functionality will move into the SharePoint Online admin center; the functionality described above will stay intact, just no longer in a separate portal.

Conclusion

There’s no doubt that cloud storage is a cornerstone of successful digital transformation and if you’re already using Office 365, OneDrive for Business is definitely the best option.



Go to Original Article
Author: Paul Schnackenburg

Active Directory replication troubleshooting tips and tools

Active Directory uses replication to keep data consistent between your domain controllers. When you create, delete or modify an object on one domain controller, the change is replicated to the other domain controllers in the domain.

Active Directory replication troubleshooting can be tricky because there can be several potential reasons behind a replication failure. Two of the more common causes include a loss of network connectivity or a DNS configuration error. Replication errors can also occur as a result of authentication errors or a situation when the domain controller lacks the hardware resources to keep pace with the current demand. This is by no means a comprehensive list, but rather a rundown of some of the issues that commonly cause Active Directory replication failures.

Check the basics first

When starting the Active Directory replication troubleshooting process, it’s best to check the simple things first. Make sure that the domain controllers are powered on, functioning and able to communicate with one another across the network. It’s also important to make sure your firewalls are configured to allow Remote Procedure Call (RPC) traffic on port 135.

Similarly, take the time to review any recent changes to your network. This might include DNS configuration adjustments, modifications to the network topology or Dynamic Host Configuration Protocol alterations.

In addition, there are several system services that need to be running on your domain controllers for Active Directory replication to work properly. You should use the service control manager or PowerShell’s Get-Service cmdlet to verify that the DNS, Kerberos Key Distribution Center, Windows Time (W32time), RPC and network connectivity services are running.
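
As a quick starting point, a PowerShell spot check of some of these services might look like the following. The list of service names is an assumption; adjust it for the roles installed on your domain controllers:

# Spot-check core services on the local domain controller
$services = 'DNS', 'Kdc', 'W32Time', 'Netlogon', 'DFSR', 'RpcSs'
Get-Service -Name $services -ErrorAction SilentlyContinue |
    Select-Object Name, DisplayName, Status |
    Format-Table -AutoSize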

Make sure your domain controller clocks are all in sync. Active Directory depends on the Kerberos protocol, which is sensitive to clock skew. If the domain controller clocks fall out of sync by more than a few minutes, Kerberos authentication will start to fail, which can cause a variety of problems.
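
The built-in w32tm utility can confirm how far the clocks have drifted. For example, from an elevated prompt on a domain controller:

# Show the local time service status, including the offset from its time source
w32tm /query /status

# Compare the clock offset of the domain controllers in the domain
w32tm /monitor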

Begin Active Directory replication troubleshooting with DCDiag

Windows provides several native tools to help you figure out why you are experiencing problems with Active Directory replication. One of the first tools to try is DCDiag.

DCDiag is a general-purpose Active Directory diagnostic tool that is not specifically designed for troubleshooting Active Directory replication failures, but it is a great tool to start with. The reason for this is that Active Directory replication issues are often a symptom of a deeper problem. If your Active Directory is suffering from troubles that extend beyond simple replication problems, then the DCDiag tool can help pinpoint those issues.

To use the DCDiag tool, open an elevated command prompt window on a domain controller experiencing replication problems. Next, enter the DCDiag command. When you do, Windows will run a series of tests designed to assess the health of various Active Directory components. You can see an example of this in Figure 1.

Figure 1. Using the DCDiag tool to test the health of Active Directory.

If the DCDiag tool does not detect any problems, then you might consider running it on each domain controller within the domain. Occasionally, you may find that the tool returns very different results depending on where it runs.
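
One way to avoid logging on to each domain controller is to point DCDiag at them remotely with its /s switch. A hedged sketch using the Active Directory PowerShell module (this assumes RSAT is installed and writes one report per domain controller to the temp folder):

# Run DCDiag against every domain controller and save each report to a text file
Import-Module ActiveDirectory
Get-ADDomainController -Filter * | ForEach-Object {
    dcdiag /s:$($_.HostName) > "$env:TEMP\dcdiag_$($_.Name).txt"
}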

Try the Active Directory Replication Status tool

Once you have verified the overall health of your Active Directory environment, you should run the Active Directory Replication Status tool, which Microsoft provides as a free download.

This tool, which you can see in Figure 2, discovers your Active Directory environment and provides information about the state of replication on the domain controllers.

Figure 2. The Active Directory Replication Status tool checks the replication status for the domain controllers in your forest or domain.

To start, use the workspace on the left side of the tool to select either your forest or a specific domain within the forest. After your selection, click the Refresh Replication Status button. When you do, the tool collects information from your domain controllers and displays the results. The Environment Discovery tab, which you can see in the previous figure, will display the Active Directory nodes and the status of each. Similarly, the Replication Status Collection Details tab, shown in Figure 3, displays where replication is succeeding and where it is failing.

Figure 3. The Active Directory Replication Status tool shows the current Active Directory replication status for each domain controller.

Get additional details from the Replication Status Admin tool

The Replication Status Admin tool, often referred to as RepAdmin, is one of the most widely used tools for troubleshooting Active Directory replication problems. When you run this tool on a domain controller and use the /showrepl switch, it will show all the inbound replication partner domain controllers, as well as the status of the most recent replication attempt from each. You can see what this looks like in Figure 4.

Figure 4. The RepAdmin tool gives a rundown of the replication status of your Active Directory.

For the purposes of this article, we ran the RepAdmin tool on a domain controller in a small Active Directory domain. In larger environments, it may be helpful to export the information to a CSV file rather than display it on screen. That way, you can sort and filter the information as needed. To create a CSV file, use this command:

RepAdmin /Showrepl * /CSV > showrepl.csv
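
Once you have the CSV, PowerShell can quickly filter it down to the replication links that are actually failing. The column names below match the usual /showrepl CSV header, but treat them as an assumption and check the header row of your own export:

# Show only the replication links that have recorded failures
Import-Csv .\showrepl.csv |
    Where-Object { [int]$_.'Number of Failures' -gt 0 } |
    Select-Object 'Source DSA', 'Destination DSA', 'Naming Context', 'Last Failure Time' |
    Format-Table -AutoSize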

One last bit of advice

The tools and techniques discussed in this article should help get you started with your Active Directory replication troubleshooting process. However, if you are pressed for time and need a quick resolution, you can forcibly remove the malfunctioning domain controller from the domain and then add it back in. This will almost always either resolve the issue or yield additional clues as to why the problem is happening.

Go to Original Article
Author:

Use technology to manage your well-being

10 tips to use technology to manage your well-being

Have you heard the saying, “We’re all in the same storm, but we’re in different boats?” People are experiencing a variety of challenges right now, trying to work, learn, and connect with others. May is Mental Health Awareness Month, and we wanted to share some best practices and technology-related tips to help reduce stress and anxiety.

1. Keep a schedule.

Having a set schedule during the week can be a comfort, giving your days structure, balance, and purpose. Use a basic schedule template, Outlook, or the Microsoft To-Do app (it syncs with Outlook) to plan your day and include self-care. Here are some things to consider including in your schedule:

  • Shower and get dressed. This simple ritual can help kick-start your day.
  • Focus time. This is time when you’re not on conference calls and you can get your work done.
  • Daily movement. It could be an intense workout, stretching, or just a relaxing walk—listen to your body to see what it needs that day.
  • Social time. Time to catch up with friends and family over video calls.
  • “Me time.” Maybe it’s your morning coffee, or a soak in the tub at the end of the day, but make sure you carve out some time just for yourself.
  • Non-screen time to read a book, play a game, do a puzzle, enjoy your hobby, and more.

Mental health is a family practice. Get your kids involved in creating a shared family schedule in Outlook so they have a sense of ownership and control. Help your children find the right kind of self-care to learn best practices for a lifetime.

2. Check in on how people are doing.

Not every poll needs to be about people’s emotions. To add some fun, try creating polls on music, food, TV shows, or whatever you want.

Check in on yourself, too. Stress less, move more, and sleep more soundly with meditations, exercises, and tips available in the Headspace app. And if you have a Microsoft 365 Family or Personal subscription (formerly Office 365), you can get one month free. Learn more about the offer.

3. Keep kids on track.

Use the Family Safety app preview or website to help make sure your kids are finding balance, too. Set screen-time limits for specific apps, sites, and games, make sure they’re viewing sites appropriate for their age, and review Family Safety reports to see how much time they’re really spending on their devices.

4. Get things done.

There’s something to be said about checking things off your to-do list, like cleaning out the basement or garage, finally dialing in the backyard or deck space (or your indoor plants), digitizing the family photos, or any other projects that have been on the back burner. Joy can come from accomplishing things. The Microsoft To-Do app is free and is perfect for keeping track, plus it syncs with Outlook so you can easily track to-dos for work and home.

5. Connect with friends and family . . . remotely.

If you’re feeling isolated, get some fun Skype video calls on your calendar, like a happy hour with friends or a family call over the weekend. For more ideas, read about creative ways to connect.

6. Learn something new.

If you don’t already have a hobby or pastime to pursue, now’s a great time to try something new that’s piqued your interest. Not sure what you want to try? Here are some places to start:

7. If the future seems uncertain, plan it out.

When things are very much up in the air, it can be comforting to write out your thoughts or even a plan for the future. Instead of letting your mind spin, open Word for the web or OneNote and type up some mitigation plans for whatever you’re worrying about. Use broad strokes here—no one can predict the future, so there’s no need to describe every detail or option. Rest easier knowing that you have a plan you can refer to later, if needed.

Try to write 3 things that you are grateful for every day. These journaling templates can help you get your thoughts down:

8. Keep presentation anxiety at bay.

You may find yourself steeped in new methods of working from home and having to adapt quickly to technologies. Help reduce any remote presentation and online jitters with Presenter Coach in PowerPoint. Get in the habit of taking practice runs through your slides, and Presenter Coach will give you a report and suggestions for improvements on things like pacing, pitch, filler words, euphemisms, and culturally sensitive terms.

9. When you’re done working, be done.

It can be so tempting to check that email one more time in the evening or over the weekend, just to make sure you’ve taken care of everything and no one is left hanging until tomorrow. Instead, set up an automatic reply in Outlook to let your coworkers know you’re done for the day and will get back to them later. That way, people know what to expect, and you can relax knowing that you’ve set expectations.

Want a way to track how successful you are at unplugging? Use My Analytics Wellbeing to keep track of the days you disconnect after work. My Analytics provides several useful statistics about your work habits that can help you with well-being, focus, network, and collaboration.

10. Disconnect and recharge.

With so many online meetings, happy hours, calls with friends, and binge watching, screen fatigue is a real thing. Put down the devices every day for a while. Doing this at night is a great way to wind down before bed. Here are some ideas on what to do instead:

  • Read a book or printed magazine
  • Work on a project
  • Sit outside (deck, yard) and enjoy the birds, plants, sky, trees
  • Cook a healthy meal
  • Listen to music—really listen, with no other distractions

Get more well-being tips

See recommendations on how Microsoft technology can support you in daily activities. Learn more

These tips are a starting point for using tech to help bring some calm during these times. If you or a loved one is in crisis, there’s help available now.

Go to Original Article
Author: Microsoft News Center

Wanted – 1155 Motherboard

Think or know? Your post here seems to indicate you were willing to sell it with no mention of a fault.

Then just over a year ago you did say there was a fault, but despite having multiple systems you have no GPU in any of them to confirm the fault (and can I assume the price is £50, since you didn’t provide one like I asked in the OP)?

Is it socket damage then? That’s the most likely cause of a dubious PCI-E slot?

I might be interested if you were a bit more forthcoming with info you clearly DO have 😉 Board revision?

Go to Original Article
Author:

Find and lock down lax Windows share permissions

Keeping your data secure and away from unauthorized users is a complex task, which can be even more difficult if a default setting in Windows gets in your way.

Trying to secure Windows share permissions is a big challenge due to a setting called bypass traverse checking, which the OS enables by default. This setting lets users reach a folder even if they do not have access rights to any of its parent folders.

We can remove this authorization with a Group Policy Object setting, but it’s there for a reason: without it enabled, you will see a big drop in performance, since Windows will check every parent folder to see whether the user is allowed to reach the target.
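
If you want to see the privilege behind this setting, it appears in a user’s token as SeChangeNotifyPrivilege. A quick way to confirm that the current account holds it:

# List the current token's privileges; bypass traverse checking shows up as SeChangeNotifyPrivilege
whoami /priv | Select-String 'SeChangeNotifyPrivilege'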

This article will explain how to create a report on Windows share permissions to determine which users have excessive authorizations, and how to fix that using PowerShell and Sysinternals.

Gathering file shares and their authorized users

First, we need to find the file shares on the servers and client systems. We could do this either by using the Get-SmbShare command or by querying the Win32_Share WMI class using either Get-CimInstance or Get-WmiObject.

For this example, Get-WmiObject is the preferred way to fetch our shares because it’s a more streamlined approach. Launch the PowerShell Terminal as an admin on a file server and enter the following command:

Get-WMIObject -Class win32_share

Name     Path                              Description
----     ----                              -----------
MyShare  C:\demoshare                      Demo share
ADMIN$   C:\WINDOWS                        Remote Admin
C        C:\
C$       C:\                               Default share
D$       D:\                               Default share
E$       E:\                               Default share
IPC$                                       Remote IPC
print$   C:\WINDOWS\system32\spool\drivers Printer Drivers
scripts  C:\scripts

The PowerShell command outputs all the shares, but it doesn’t show the users with access to them. That’s because the Windows share permissions reside in another WMI class called Win32_LogicalShareSecuritySetting:

Get-WmiObject -Class Win32_LogicalShareSecuritySetting

The resulting output doesn’t tell us much either. We need a more comprehensive PowerShell script to generate something more useful:

# Get all shares on the computer
$Shares = Get-WMIObject -Class win32_share

# Variable to add processed shares to.
$NetworkShares = [System.Collections.Generic.List[PSCustomObject]]::new()

# Ignore default shares by filtering out '2147483648'
foreach ($Share in $Shares | ? {$_.Type -ne '2147483648' -and $_.Name -ne 'print$'}) {

# Create an object that we'll return
$ShareObject = [PSCustomObject]@{
Name = $Share.Name
Description = $Share.Description
LocalPath = $Share.Path
ACL = [System.Collections.ArrayList]::new()

}
# Get the security settings for the share
$ShareSecurity = Get-WmiObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$($Share.Name)'"

# If security settings exists, build a list with ACLs
if($Null -ne $ShareSecurity){
Try{
$SecurityDescriptor = $ShareSecurity.GetSecurityDescriptor().Descriptor

foreach($AccessControl in $SecurityDescriptor.DACL){

$UserName = $AccessControl.Trustee.Name
$Trustee = $AccessControl.Trustee

If ($Trustee.Domain -ne $Null) {
$UserName = "$($Trustee.Domain)$UserName"
}

If ($Trustee.Name -eq $Null) {
$UserName = $Trustee.SIDString
}

$ShareObject.ACL.Add(
[System.Security.AccessControl.FileSystemAccessRule]::new(
$UserName,
$AccessControl.AccessMask,
$AccessControl.AceType
)
) | Out-Null
}

# Return the share object with the ACLs
$NetworkShares.Add($ShareObject)
}
Catch{
Write-Error $Error[0]
}
}
Else {
Write-Information "No permissions found for $($Share.Name) on $env:COMPUTERNAME"
}
}

The content of the $NetworkShares variable should end up looking similar to the following:

PS51> $NetworkShares

Name Description LocalPath ACL
---- ----------- --------- ---
DemoShare Demo share C:\demoshare {System.Security.AccessControl.FileSystemAccessRule}
scripts C:\scripts {System.Security.AccessControl.FileSystemAccessRule, System.Security.AccessControl.FileSystemAccessRule}

PS51> $NetworkShares[0].ACL

FileSystemRights : FullControl
AccessControlType : Allow
IdentityReference : Everyone
IsInherited : False
InheritanceFlags : None
PropagationFlags : None

We’ve successfully gathered data about our Windows share permissions, showing who has access to what. That might not be enough, because administrators usually assign permissions at the NTFS level, not the network share level.

We also need to check whether the files and folders in the share carry excessive permissions for broad groups, such as Everyone or Domain Users.

Scanning file permissions using AccessChk

We have a list of our file shares. Next, we need to get all the file permissions. The fastest way to do this is by using the AccessChk utility from the Sysinternals suite and parsing the output with PowerShell.

Put AccessChk on your file server and copy the AccessChk64.exe file to your system32 folder. You can either download the utility from the Sysinternals site or use the following PowerShell code to download it and copy it to your system32 folder:

Invoke-WebRequest -OutFile $env:TEMP\AccessChk.zip -Uri https://download.sysinternals.com/files/AccessChk.zip
Expand-Archive -Path $env:TEMP\AccessChk.zip -DestinationPath $env:TEMP -Force
Copy-Item -Path $env:TEMP\AccessChk64.exe C:\Windows\System32\AccessChk64.exe

We can use PowerShell to create a wrapper function around AccessChk for use in a script:

Function Invoke-AccessChk {
param(
$Path,
$Principals,
$AccessChkPath = "$env:windirsystem32accesschk64.exe",
[switch]$DirectoriesOnly,
[switch]$AcceptEula

)

# Accept EULA
if($AcceptEula){
& $AccessChkPath /accepteula | Out-Null
}

$Argument = "uqs"
if($DirectoriesOnly){
$Argument = "udqs"
}

$Output = & $AccessChkPath -nobanner -$Argument $Path

Foreach($Row in $Output){

# If it's a row with a file path output the previous object and create a new one
if($Row -match "^S"){
If($Null -ne $Object){
if($Object.Access.Keys.Count -gt 0){
$Object
}
}
$Object = [PSCustomObject]@{
Path = $Row
Access = @{}
}
}

# If it's a row with permissions
if($Row -match "^ [R ][W ]"){
If($Row -match ($Principals -replace "\\",'\\' -join "|")){

$Row -match "^ (?<Read>[R ])(?<Write>[W ]) (?<Principal>.*)" | Out-Null

$Object.Access[$Matches.Principal] = @{
Read = $Matches.Read -eq 'R'
Write = $Matches.Write -eq 'W'
}

}
}
}
# If it's the last row - output the object once more
if($Object.Access.Keys.Count -gt 0){
$Object
}
}

We can now run Invoke-AccessChk against the network shares stored in the $NetworkShares variable from the previous step. We also add a list of the security principals — without the domain prefix — to look for:

# Invoke-AccessChk will only output files/folders where the following principals have permission:
$RiskPrincipals = @(
'Everyone',
'Domain Users',
'Domain Computers',
'Authenticated Users',
'Users'
)

$RiskyPermissions = Foreach($NetworkShare in $NetworkShares | Select -First 1){

# Only scan directory if it's shared to one of the principals in $RiskPrincipals
$RiskPrincipalExist = $Null -ne ($NetworkShare.ACL.IdentityReference.Value -replace ".*\\" | ? {$_ -in $RiskPrincipals})

if($RiskPrincipalExist){
Invoke-AccessChk -Path $NetworkShare.LocalPath -Principals $RiskPrincipals
}

}

The $RiskyPermissions variable will give output similar to this:

PS51> $RiskyPermissions

Path Access
---- ------
C:\demoshare\File1.txt {BUILTIN\Users, NT AUTHORITY\Authenticated Users}
C:\demoshare\Folder1\picture.png {NT AUTHORITY\Authenticated Users}
C:\demoshare\Folder1\Folder2 {NT AUTHORITY\Authenticated Users}

PS51> $RiskyPermissions[0].Access

Creating a report from several computers and servers

Thus far, you can get a list of all the file shares and check all the files with the PowerShell wrapper for Invoke-AccessChk. One of PowerShell’s many strengths is its ability to scale. PowerShell remoting will take the code we’ve produced to the next level to gather the information from several computers at once.

First, we need a list of computers and servers to scan. If possible, the easiest way is through the Active Directory module from RSAT:

$Computers = (Get-ADComputer -Filter *).dnsHostName

This method might not be an option in larger environments that are heavily segmented. Another approach is to get the data from your configuration management database or to enter it manually, as in the following example:

$Computers = @(
'Server1',
'Server2',
'Server3',
'Server4',
'Server5',
'PC1'
# etc
)

Now it’s time to tie all these components together in a script that uses PowerShell background jobs to do the following actions on the machines specified in the $Computers variable:

  • Get all shares that are shared out to one of the principals in $RiskPrincipals.
  • Download AccessChk if it does not already exist.
  • Check the NTFS permission of all shares gathered by AccessChk.
  • Return an object with a list with all files where the security principals in $RiskPrincipals have either read or write permissions.

The computer running the script will then collect the results of all jobs and output them to a CSV file named ShareAccessReport.csv.

Remember to run the following as an admin on a computer that has network access to those machines, and to accept the EULA for AccessChk by changing $AcceptEula to $true:

$Computers = @(
'Server-1',
'Server-2',
'PC-1'
)

# Accept EULA for AccessChk
# CHANGE TO TRUE
$AcceptEula = $false

if(!$AcceptEula){
Write-Warning "Did not accept EULA for AccessChk, can't continue"
break
}

# Principals that we want to scan for
$RiskPrincipals = @(
'Everyone',
'Domain Users',
'Domain Computers',
'Authenticated Users',
'Users'
)

# List of shares that we want to ignore.
# Setting a share name tied to it just in case since it should almost always be that path
$IgnoreShares = @(
'print$'
)

# Scriptblock that we'll send with Invoke-Command
$Scriptblock = {

$RiskPrincipals = $args[0].RiskPrincipals
$IgnoreShares = $args[0].IgnoreShares
$AcceptEula = $args[0].AcceptEula

# Function to download AccessChk
# It utilizes a shell object instead of Expand-Archive for backward compatibility
Function Download-AccessChk {
param(
$Url = "https://download.sysinternals.com/files/AccessChk.zip",
$Dest = $env:temp
)
if(Test-Path "$destaccesschk.zip"){
rm $DestAccessCHK.zip -Force
}
(New-Object System.Net.WebClient).DownloadFile($url, "$env:temp\AccessChk.zip")
$Shell = New-Object -ComObject Shell.Application
$Zip = $shell.NameSpace("$env:temp\AccessChk.zip")
$Destination = $shell.NameSpace("$env:windir\system32")

$copyFlags = 0x00
$copyFlags += 0x04
$copyFlags += 0x10

$Destination.CopyHere($Zip.Items(), $copyFlags)
}

# The function that utilizes accesschk from part 2
Function Invoke-AccessChk {
param(
$Path,
$Principals,
$AccessChkPath = "$env:windirsystem32accesschk64.exe",
[switch]$DirectoriesOnly,
[switch]$AcceptEula

)

if(!(Test-Path "$env:windir\system32\accesschk64.exe")){
Download-AccessChk
}

# Accept EULA
if($AcceptEula){
& $AccessChkPath /accepteula | Out-Null
}

$Argument = "uqs"
if($DirectoriesOnly){
$Argument = "udqs"
}

$Output = & $AccessChkPath -nobanner -$Argument $Path

Foreach($Row in $Output){

# If it's a row with a file path output the previous object and create a new one
if($Row -match "^\S"){
If($Null -ne $Object){
if($Object.Access.Keys.Count -gt 0){
$Object
}
}
$Object = [PSCustomObject]@{
Path = $Row
Access = @{}
}
}

# If it's a row with permissions
if($Row -match "^ [R ][W ]"){
If($Row -match ($Principals -replace "\\",'\\' -join "|")){

$Row -match "^ (?<Read>[R ])(?<Write>[W ]) (?<Principal>.*)" | Out-Null

$Object.Access[$Matches.Principal] = @{
Read = $Matches.Read -eq 'R'
Write = $Matches.Write -eq 'W'
}

}
}
}
# If it's the last row - output the object once more
if($Object.Access.Keys.Count -gt 0){
$Object
}
}

# Get all the shares by using WMI
$Shares = Get-WmiObject -Class win32_share

# Create an object that we will later return when we're done
$ReturnObject = [PSCustomObject]@{
ComputerName = $env:COMPUTERNAME
NetworkShares = [System.Collections.Generic.List[PSCustomObject]]::new()
AccessibleObjects = @{}
}

# Ignore default shares by filtering out '2147483648'
# Ignore shares in $IgnoreShares
foreach ($Share in $Shares | ? {$_.Type -ne '2147483648'} | ? {$_.Name -notin $IgnoreShares}) {
$ShareObject = [PSCustomObject]@{
Name = $Share.Name
Description = $Share.Description
LocalPath = $Share.Path
ACL = [System.Collections.ArrayList]::new()

}

$ShareSecurity = Get-WMIObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$($Share.Name)'"
if($Null -ne $ShareSecurity){
Try{
$SecurityDescriptor = $ShareSecurity.GetSecurityDescriptor().Descriptor

foreach($AccessControl in $SecurityDescriptor.DACL){

$UserName = $AccessControl.Trustee.Name
$Trustee = $AccessControl.Trustee

If ($Trustee.Domain -ne $Null) {
$UserName = "$($Trustee.Domain)$UserName"
}

If ($Trustee.Name -eq $Null) {
$UserName = $Trustee.SIDString
}

$ShareObject.ACL.Add(
[System.Security.AccessControl.FileSystemAccessRule]::new(
$UserName,
$AccessControl.AccessMask,
$AccessControl.AceType
)
) | Out-Null
}

# Only add network share if it contains a risk user/group

$Match = $False
Foreach($IdentityReference in $ShareObject.ACL.IdentityReference.Value){
Foreach($Pattern in $RiskPrincipals){
if($IdentityReference -Match $Pattern){
$Match = $True
}
}
}
if($Match){
$ReturnObject.NetworkShares.Add($ShareObject)
}
Else {
Write-Verbose "No match for risky groups, not adding"
}

}
Catch{
Write-Error $Error[0]
}
}
Else {
Write-Information "No permissions found for $($Share.Name) on $env:COMPUTERNAME"
}

}
# Get all files from NetworkShares where a principal from $RiskPrincipals have either read or write access
$ReturnObject.NetworkShares | Foreach {
$ReturnObject.AccessibleObjects[$_.Name] = Invoke-AccessChk -Path $_.LocalPath -Principals $RiskPrincipals -AcceptEula:$AcceptEula
}

# Done! Lets return the returnobject:
$ReturnObject
}

# To add to the argument list of Invoke-Command because the remote PowerShell job doesn't have access to our variable space.
$InvokeParam = @{
RiskPrincipals = $RiskPrincipals
IgnoreShares = $IgnoreShares
AcceptEula = $AcceptEULA
}

# Start jobs
$Job = Invoke-Command -AsJob -ComputerName $Computers -ArgumentList $InvokeParam -ScriptBlock $Scriptblock

# Wait for jobs to finish
$Job | Wait-Job

# Collect data from all jobs
$Output = Get-Job | Receive-Job

# Output the output into a CSV
$ToCSV = Foreach($Result in $Output){

Foreach($Key in $Result.AccessibleObjects.Keys) {

# For using Select-Object expressions to get the data out of $Result.AccessibleObjects
# The downside of working a lot with hashtables
$ReadAccess = @{
Name='ReadAccess'
Expression={
$Base = $_.Access
($Base.Keys | ? {$Base[$_].Read}) -join ","
}
}

$WriteAccess = @{
Name='WriteAccess'
Expression={
$Base = $_.Access
($Base.Keys | ? {$Base[$_].Write}) -join ","
}
}

# Select from AccessibleObjects and add properties for the computer, the share name and the principals with ReadAccess and WriteAccess
$Result.AccessibleObjects[$Key] | Select @{Name='ComputerName';Expression={$Result.ComputerName}},@{Name='ShareName';Expression={$Key}},Path,$ReadAccess,$WriteAccess
}
}
# Export the CSV
$ToCSV | Export-Csv -Path .\ShareAccessReport.csv

When the PowerShell job finishes, it will create a full report of the access held by the principals in the $RiskPrincipals variable.

Fixing Windows share permissions

After you review the CSV and find the permissions that need adjusting, there are two ways to correct them. If there are only a few, then the best way is through the GUI. But if there are thousands, then the following command will use the CSV output to speed this along:

# This needs to run locally on the server with the file share.

$UserToRemove = 'Guest'
$CSV = Import-Csv -Path .\ShareAccessReport.csv
$CSV | ? {$_.ComputerName -eq $env:COMPUTERNAME} | Foreach {
$ACL = Get-Acl -Path $_.Path
$ACL.Access | ? {($_.IdentityReference.Value -replace '.*\\') -eq $UserToRemove} | Foreach {
$ACL.RemoveAccessRule($_) | Out-Null
}
Set-Acl -Path $_.Path -AclObject $ACL
}

This PowerShell script will remove all NTFS permissions held by the Guest security principal on the files in the report and write the updated ACLs back to disk.

The first report will usually bring a lot of work, though, because it will uncover a lot of oddities and risks in your Windows share permissions. But running a solution like this regularly, especially targeted toward shares with sensitive information, will pay off in the end.
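
If you decide to run the report on a schedule, a registered scheduled task on the collection server is one option. A minimal sketch, assuming the report script above has been saved to a hypothetical path of C:\Scripts\ShareAccessReport.ps1; adjust the path, schedule and account to suit your environment:

# Register a weekly scheduled task that runs the share permission report
$Action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\ShareAccessReport.ps1'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At '06:00'
Register-ScheduledTask -TaskName 'Share Access Report' -Action $Action -Trigger $Trigger -RunLevel Highest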

Go to Original Article
Author:

Learn to manage Office 365 ProPlus updates

A move to the cloud can be confusing until you get your bearings, and learning how to manage Office 365 ProPlus updates will take some time to make sure they’re done right.

Office 365 is a bit of a confusing name. It is actually a whole suite of programs based on a subscription model, mostly cloud based. However, Office 365 ProPlus is a suite inside a suite: a subset collection of software contained in most Office 365 subscriptions. This package is the client install that contains the programs everyone knows: Word, Excel, PowerPoint and so on.

Editor’s note: Microsoft recently announced it would rename Office 365 ProPlus to Microsoft 365 Apps for enterprise, effective on April 21.

For the sake of comparison, Office 2019, Office 2016 and older versions are the on-premises managed suite with the same products, but with a much slower rollout pace for updates and fixes. Updates for new features are also slower and may not even appear until the next major version, which might not be until 2022 based on Microsoft’s release cadence.

Rolling the suite out hasn’t changed much for many years. You can push out updates for these traditional Office versions the same way you do other Windows updates, namely with Windows Server Update Services (WSUS) and Configuration Manager. Microsoft gave the latter a recent branding adjustment and is now referring to it as Microsoft Endpoint Configuration Manager.

The Office 365 ProPlus client needs a different approach, because updates are not delivered or designed in the same way as the traditional Office products. You can still use Configuration Manager, but the setup is different.

Selecting the update channel for end users

Microsoft gives you the option to determine when your users will get new feature updates. There are five update channels: Insider Fast, Monthly Channel, Monthly Channel (Targeted), Semi-Annual Channel and Semi-Annual Channel (Targeted). Insider Fast gets updates first, Monthly Channel updates arrive on a monthly basis and Semi-Annual updates come every six months. Users in the Targeted channels get these updates first so they can report back to IT with any issues or other feedback.

You can configure the channel as part of an Office 365 ProPlus deployment with the Office Deployment Toolkit (ODT), but this only works at the time of install. There are two ways to configure the channel after deployment: Group Policy and Configuration Manager.

Using Group Policy for Office 365 ProPlus updates

Using Group Policy, you can set which channel a computer gets by enabling the Update Channel policy setting under Computer Configuration\Policies\Administrative Templates\Microsoft Office 2016 (Machine)\Updates. This is a registry setting located at HKLM\Software\Policies\Microsoft\office\16.0\common\officeupdate\updatebranch. The options for this value are: Current, FirstReleaseCurrent, InsiderFast, Deferred and FirstReleaseDeferred.

Managing Office 365 ProPlus updates from Group Policy requires the administrator to select the Enabled option in the Update Channel policy setting.

A scheduled task, which is deployed as part of the Office 365 ProPlus install called Office Automatic Update 2.0, reads that setting and applies the updates.

You can use standard Group Policy techniques to target policies to specific computers or apply the registry settings.
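
If you prefer to set the same registry value directly, for instance from a Configuration Manager script, a short PowerShell sketch based on the policy path above might look like this (Deferred corresponds to the Semi-Annual Channel):

# Set the Update Channel policy value described above
$Key = 'HKLM:\Software\Policies\Microsoft\office\16.0\common\officeupdate'
New-Item -Path $Key -Force | Out-Null
Set-ItemProperty -Path $Key -Name 'updatebranch' -Value 'Deferred'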

Using Configuration Manager for Office 365 ProPlus updates

You can use Configuration Manager, together with ODT or Group Policy, to define which channel a client is in, but it also works as a software update point rather than using WSUS or downloading straight from Microsoft’s servers. With this method, you will need to ensure the Office 365 ProPlus builds for all the deployed channels are available from the software update point in Configuration Manager.

Office 365 ProPlus updates work the same way as other Windows updates: Microsoft releases the update, a local WSUS server downloads them, Configuration Manager synchronizes with the WSUS server to copy the updates, and then Configuration Manager distributes the updates to the distribution points. You need to enable the Office 365 Client product on WSUS for this approach to work.

Set up Configuration Manager to handle Office 365 ProPlus updates by selecting the Office 365 Client product on the WSUS server.

It’s also possible to configure clients just to get the updates straight from Microsoft if you don’t want or need control over them.

Caveats for Office 365 ProPlus updates

When checking a client’s channel, the Office 365 ProPlus client will only show the channel it was in during its last update. Only when the client gets a new update will it show which channel it obtained the new update from, so the registry setting is a better way to check the current configuration.
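
A hedged way to check both of these from PowerShell is to read the registry on the client; the policy key comes from the Group Policy section above, while the Click-to-Run Configuration key name is an assumption worth verifying on one of your own machines:

# The channel requested by policy (only present if the policy has been set)
Get-ItemProperty -Path 'HKLM:\Software\Policies\Microsoft\office\16.0\common\officeupdate' -Name updatebranch -ErrorAction SilentlyContinue

# The channel and build the Click-to-Run client last applied
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration' |
    Select-Object UpdateChannel, VersionToReport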

When an Office 365 ProPlus client detects an update, it will download a compressed delta update. However, if you change the client to a channel that is on an older version of Office 365 ProPlus, the update will be much larger but still smaller than the standard Office 365 ProPlus install. Also, if you change the channel multiple times, it can take up to 24 hours for a second version change to be recognized and applied.

As always with any new product: research, test and build your understanding of these mechanisms before you roll out Office 365 ProPlus. If an update breaks something your business needs, you need to know how to fix that situation across your fleet quickly.

Go to Original Article
Author:

RTO and RPO: Understanding Disaster Recovery Times

You will focus a great deal of your disaster recovery planning (and rightly so) on the data that you need to capture. The best way to find out if your current strategy does this properly is to try our acid test. However, backup coverage only accounts for part of a proper overall plan. Your larger design must include a thorough model of recovery goals, specifically Recovery Time Objective (RTO) and Recovery Point Objective (RPO).

Ideally, a restore process would contain absolutely everything. Practically, expect that to never happen. This article explains the risks and options of when and how quickly operations can and should resume following systems failure.

Table of Contents

Disaster Recovery Time in a Nutshell

What is Recovery Time Objective?

What is Recovery Point Objective?

Challenges Against Short RTOs and RPOs

RTO Challenges

RPO Challenges

Outlining Organizational Desires

Considering the Availability and Impact of Solutions

Instant Data Replication

Short Interval Data Replication

Ransomware Considerations for Replication

Short Interval Backup

Long Interval Backup

Ransomware Considerations for Backup

Using Multiple RTOs and RPOs

Leveraging Rotation and Retention Policies

Minimizing Rotation Risks

Coalescing into a Disaster Recovery Plan

Disaster Recovery Time in a Nutshell

If a catastrophe strikes that requires recovery from backup media, most people will first ask: “How long until we can get up and running?” That’s an important question, but not the only time-oriented problem that you face. Additionally, and perhaps more importantly, you must ask: “How much already-completed operational time can we afford to lose?” The business-continuity industry represents the answers to those questions in the acronyms RTO and RPO, respectively.

What is Recovery Time Objective?

Your Recovery Time Objective (RTO) sets the expectation for the answer to, “How long until we can get going again?” Just break the words out into a longer sentence: “It is the objective for the amount of time between the data loss event and recovery.”

Recovery Time Objective RTO

Of course, we would like to make all of our recovery times instant. But, we also know that will not happen. So, you need to decide in advance how much downtime you can tolerate, and strategize accordingly. Do not wait until the midst of a calamity to declare, “We need to get online NOW!” By that point, it will be too late. Your organization needs to build up those objectives in advance. Budgets and capabilities will define the boundaries of your plan. Before we investigate that further, let’s consider the other time-based recovery metric.

What is Recovery Point Objective?

We don’t just want to minimize the amount of time that we lose; we also want to minimize the amount of data that we lose. Often, we frame that in terms of retention policies — how far back in time we need to be able to access. However, failures usually cause a loss of systems during run time. Unless all of your systems continually duplicate data as it enters the system, you will lose something. Because backups generally operate on a timer of some sort, you can often describe that potential loss in a time unit, just as you can with recovery times. We refer to the maximum total acceptable amount of lost time as a Recovery Point Objective (RPO).

Recovery Point Objective RPO

As with RTOs, shorter RPOs are better. The shorter the amount of time since a recovery point, the less overall data lost. Unfortunately, reduced RPOs take a heavier toll on resources. You will need to balance what you can achieve against what your business units want. Allow plenty of time for discussions on this subject.

Challenges Against Short RTOs and RPOs

First, you need to understand what will prevent you from achieving instant RTOs and RPOs. More importantly, you need to ensure that the critical stakeholders in your organization understand it. These objectives mean setting reasonable expectations for your managers and users at least as much as they mean setting goals for your IT staff.

RTO Challenges

We can define a handful of generic obstacles to quick recovery times:

  • Time to acquire, configure, and deploy replacement hardware
  • Effort and time to move into new buildings
  • Need to retrieve or connect to backup media and sources
  • Personnel effort
  • Vendor engagement

You may also face some barriers specific to your organization, such as:

  • Prerequisite procedures
  • Involvement of key personnel
  • Regulatory reporting

Make sure to clearly document all known conditions that add time to recovery efforts. They can help you to establish a recovery checklist. When someone requests a progress report during an outage, you can indicate the current point in the documentation. That will save you time and reduce frustration.

RPO Challenges

We could create a similar list for RPO challenges as we did for RTO challenges. Instead, we will use one sentence to summarize them all: “The backup frequency establishes the minimum RPO”. In order to take more frequent backups, you need a fast backup system with adequate amounts of storage. So, your ability to bring resources to bear on the problem directly impacts RPO length. You have a variety of solutions to choose from that can help.

Outlining Organizational Desires

Before expending much effort figuring out what you can do, find out what you must do. Unless you happen to run everything, you will need input from others. Start broadly with the same type of questions that we asked above: “How long can you tolerate downtime during recovery?” and “How far back from a catastrophic event can you re-enter data?” Explain RTOs and RPOs. Ensure that everyone understands that RPO refers to a loss of recent data, not long-term historical data.

These discussions may require a fair bit of time and multiple meetings. Suggest that managers work with their staff on what-if scenarios. They can even simulate operations without access to systems. For your part, you might need to discover the costs associated with solutions that can meet different RPO and RTO levels. You do not need to provide exact figures, but you should be ready and able to answer ballpark questions. You should also know the options available at different spend levels.

Considering the Availability and Impact of Solutions

To some degree, the amount that you spend controls the length of your RTOs and RPOs. That has limits; not all vendors provide the same value per dollar spent. But, some institutions set out to spend as close to nothing as possible on backup. While most backup software vendors do offer a free level of their product, none of them makes their best features available at no charge. Organizations that try to spend nothing on their backup software will have high RTOs and RPOs and may encounter unexpected barriers. Even if you find a free solution that does what you need, no one makes storage space and equipment available for free. You need to find a balance between cost and capability that your company can accept.

To help you understand your choices, we will consider different tiers of data protection.

Instant Data Replication

For the lowest RPO, only real-time replication will suffice. In real-time replication, every write to live storage is also written to backup storage. You can achieve this in many ways, but the most reliable involve dedicated hardware. You will spend a lot, but you can reduce your RPO to effectively zero. Even a real-time replication system can drop active transactions, so never expect a complete shield against data loss.

Real-time replication systems have a very high associated cost. For the most reliable protection, they will need to span geography as well. If you just replicate to another room down the hall and a fire destroys the entire building, your replication system will not save you. So, you will need multiple locations, very high speed interconnects, and capable storage systems.

Short Interval Data Replication

If you can sustain a few minutes of lost information, then you usually find much lower price tags for short-interval replication technology. Unlike real-time replication, software can handle the load of delayed replication, so you will find more solutions. As an example, Altaro VM Backup offers Continuous Data Protection (CDP), which cuts your RPO to as low as five minutes.

As with instant replication, you want your short-interval replication to span geographic locations if possible. But, you might not need to spend as much on networking, as the delays in transmission give transfers more time to complete.

Ransomware Considerations for Replication

You always need to worry about data corruption in replication. Ransomware adds a new twist but presents the same basic problem. Something damages your real-time data. None the wiser, your replication system makes a faithful copy of that corrupted data. The corruption or ransomware has turned both your live data and your replicated data into useless jumbles of bits.

Anti-malware and safe computing practices present your strongest front-line protection against ransomware. However, you cannot rely on them alone. The upshot: you cannot rely on replication systems alone for backup. A secondary implication: even though replication provides very short RPOs, you cannot guarantee them.

Short Interval Backup

You can use most traditional backup software in short intervals. Sometimes, those intervals can be just, or nearly, as short as short-term replication intervals. The real difference between replication and backup is the number of possible copies of duplicated data. Replication usually provides only one copy of live data — perhaps two or three at the most — and no historical copies. Backup programs differ in how many unique simultaneous copies that they will make, but all will make multiple historical copies. Even better, historical copies can usually exist offline.

You do not need to set a goal of only a few minutes for short interval backups. To balance protection and costs, you might space them out in terms of hours. You can also leverage delta, incremental, and differential backups to reduce total space usage. Sometimes, your technologies have built-in solutions that can help. As an example, SQL administrators commonly use transaction log backups on a short rotation to make short backups to a local disk. They perform a full backup each night that their regular backup system captures. If a failure occurs during the day that does not wipe out storage, they can restore the previous night’s full backup and replay the available transaction log backups.
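
As a rough illustration of that SQL pattern, the SqlServer PowerShell module can drive both the frequent log backups and the nightly full backup; the instance, database and path names here are placeholders:

# Requires the SqlServer module (Install-Module SqlServer)
Import-Module SqlServer

# Frequent transaction log backup to local disk (run on a short schedule)
Backup-SqlDatabase -ServerInstance 'SQL01' -Database 'Sales' -BackupAction Log -BackupFile 'D:\SQLLogBackups\Sales_log.trn'

# Nightly full backup that the regular backup system then captures
Backup-SqlDatabase -ServerInstance 'SQL01' -Database 'Sales' -BackupAction Database -BackupFile 'D:\SQLBackups\Sales_full.bak'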

Long Interval Backup

At the “lowest” tier, we find the oldest solution: the reliable nightly backup. This usually costs the least in terms of software licenses and hardware. Perhaps counter-intuitively, it also provides the most resilient solution. With longer intervals, you also get longer-term storage choices. You get three major benefits from these backups: historical data preservation, protection against data corruption, and offline storage. We will explore each in the upcoming sections.

Ransomware Considerations for Backup

Because we use a backup to create distinct copies, it has some built-in protection against data corruption, including ransomware. As long as the ransomware has no access to a backup copy, it cannot corrupt that copy. First and foremost, that means that you need to maintain offline backups. Replication requires essentially constant connectivity to its replicas, so only backup can work under this restriction. Second, it means that you need to exercise caution when you execute restore procedures. Some ransomware authors have made their malware aware of several common backup applications, and they will hijack it to corrupt backups whenever possible. You can only protect your offline data copies by attaching them to known-safe systems.

Using Multiple RTOs and RPOs

You will need to structure your systems into multiple RTO and RPO categories. Some outages will not require much time to recover from; some will require different solutions. Even though we tend to think primarily in terms of data during disaster recovery planning, you must consider equipment as well. For instance, if your sales division prints its own monthly flyers and you lose a printer, then you need to establish RTOs, RPOs, downtime procedures and recovery processes just for those print devices.

You also need to establish multiple levels for your data, especially when you have multiple protection systems. For example, if you have both replication and backup technologies in operation, then you will set one RPO/RTO value for times when the replication works, and RTO/RPO values for when you must resort to long-term backup. That could happen due to ransomware or some other data corruption event, but it can also happen if someone accidentally deletes something important.

To start this planning, establish “Best Case” and “Worst Case” plans and processes for your individual systems.

Leveraging Rotation and Retention Policies

For your final exercise in time-based disaster recovery designs, we will look at rotation and retention policies. “Rotation” comes from the days of tape backups, when we would decide how often to overwrite old copies of data. Now that high-capacity external disks have reached a low-cost point, many businesses have moved away from tape. You may not overwrite media anymore, or at least not at the same frequency. Retention policies dictate how long you must retain at least one copy of a given piece of information. These two policies directly relate to each other.

Backup Rotation and Retention

In today’s terms, think of “rotation” more in terms of unique copies of data. Backup systems have used “differential” and “incremental” backups for a very long time. The former is a complete record of changes since the last full backup; the latter is a record of changes since the last backup of any kind. Newer backup products add “delta” and deduplication capabilities. A “delta” backup operates like a differential or incremental backup, but within files or blocks. Deduplication keeps only one copy of a block of bits, regardless of how many times it appears within an entire backup set. These technologies reduce backup time and storage space needs… at a cost.

Minimizing Rotation Risks

All of these speed-enhancing and space-reducing improvements have one major cost: they reduce the total number of available unique backup copies. As long as nothing goes wrong with your media, this will never cause you a problem. However, if one of the full backups suffers damage, then that invalidates all dependent partial backups. You must balance the number of full backups that you take against the amount of time and bandwidth necessary to capture them.

As one minimizing strategy, target your full backup operations to occur during your organization’s quietest periods. If you do not operate 24 hours per day, that might allow for nightly full backups. If you have low volume weekends, you might take full backups on Saturdays or Sundays. You can intersperse full backups on holidays.

Coalescing into a Disaster Recovery Plan

As you design your disaster recovery plan, review the sections in this article as necessary. Remember that all operations require time, equipment, and personnel. Faster backup and restore operations always require a trade-off of expense and/or resilience. Modest lengthening of allowable RTOs and RPOs can result in major cost and effort savings. Make certain that the key members of your organization understand how all of these numbers will impact them and their operations during an outage.

If you need some help defining RTO and RPO in your organization, let me know in the comments section below and I will help you out!


Go to Original Article
Author: Eric Siron