
Get back on the mend with Active Directory recovery methods

Active Directory is the bedrock of most Windows environments, so it’s best to be prepared if disaster strikes.

AD is an essential component in most organizations, so you should monitor and maintain it, such as by clearing out user and computer accounts you no longer need. With routine care, AD will run properly, but unforeseen issues can still arise. There are a few common Active Directory recovery procedures you can follow using out-of-the-box technology.

Loss of a domain controller

Many administrators see losing a domain controller as a huge disaster, but the Active Directory recovery effort is relatively simple — unless your AD was not properly designed and configured. You should never rely on a single domain controller in your domain, and large sites should have multiple domain controllers. Correctly configured site links will keep authentication and authorization working even if the site loses its domain controller.

You have two possible approaches to resolve the loss of a domain controller. The first option is to try to recover the domain controller and bring it back into service. The second option is to replace the domain controller. I recommend adopting the second approach, which requires the following actions:

  • Transfer or seize any flexible single master operation roles to an active domain controller. If you seize the role, then you must ensure that the old role holder is never brought back into service.
  • Remove the old domain controller’s account from AD. This will also remove any metadata associated with the domain controller.
  • Build a new server, join it to the domain, install Active Directory Domain Services and promote it to a domain controller.
  • Allow replication to repopulate the AD data.
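
As a sketch, the seize-and-replace steps above might look like the following from an elevated PowerShell prompt; the server name DC02 and the domain name are placeholders for your environment:

```powershell
# Seize all five FSMO roles onto a surviving domain controller (DC02 is a placeholder).
# Only seize with -Force if the old role holder is gone for good; otherwise transfer.
Move-ADDirectoryServerOperationMasterRole -Identity 'DC02' `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster `
    -Force

# On the replacement server: install the role, then promote it into the existing domain.
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName 'sphinx.org' -InstallDns -Credential (Get-Credential)
```

The promotion reboots the server, after which replication repopulates the AD data.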

How to protect AD data

Protecting data can go a long way to make an Active Directory recovery less of a problem. There are a number of ways to protect AD data. These techniques, by themselves, might not be sufficient. But, when you combine them, they provide a defense in depth that should enable you to overcome most, if not all, disasters.

First, enable accidental deletion protection on all of your organizational units (OUs), as well as user and computer accounts. This won’t stop a determined administrator from removing an account, but they must deliberately clear the protection flag first, which might prevent an accident.
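
If the protection flag wasn’t set when your OUs were created, you can switch it on everywhere with a one-liner; as a precaution, try it against a test OU first:

```powershell
# Enable the deletion-protection flag on every OU in the domain.
Get-ADOrganizationalUnit -Filter * |
    Set-ADOrganizationalUnit -ProtectedFromAccidentalDeletion $true
```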

protect from accidental deletion option
Select the option to protect from accidental deletion when creating an organizational unit in AD Administrative Center.

Recover accounts from the AD recycle bin

Another way to avoid trouble is to enable the AD recycle bin. This is an optional feature used to restore a deleted object.

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target sphinx.org -Confirm:$false

After installing the feature, you may need to enable it through AD Administrative Center. Once added, you can’t uninstall the recycle bin.

Let’s run through a scenario where a user, whose properties are shown in the screenshot below, has been deleted.

Active Directory user account
An example of a typical user account in AD, including group membership

To check for deleted user accounts, run a search in the recycle bin:

Get-ADObject -Filter {objectclass -eq 'user' -and Deleted -eq $true} -IncludeDeletedObjects

The output for this command returns a deleted object, the user with the name Emily Brunel.

Active Directory recycle bin
An AD object found in the recycle bin

For a particularly volatile AD, you may need to apply further filters to identify the account you wish to restore.
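
For example, you might narrow the search by the object’s last known name or by how recently it changed. This sketch assumes attributes that are typically available on deleted objects:

```powershell
# Narrow the recycle bin search to recently deleted users matching a name fragment.
Get-ADObject -Filter {objectclass -eq 'user' -and Deleted -eq $true -and Name -like '*Brunel*'} `
    -IncludeDeletedObjects -Properties whenChanged, LastKnownParent |
    Where-Object { $_.whenChanged -gt (Get-Date).AddDays(-7) }
```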

If you have a significant number of objects in the recycle bin, use the object globally unique identifier (GUID) to identify the object to restore.

Get-ADObject -Filter {ObjectGUID -eq '73969b9d-05fa-4b45-a667-79baba1ac9a3'} -IncludeDeletedObjects -Properties * | Restore-ADObject

The screenshot shows the restored object and its properties, including the group membership.

restored Active Directory user account
Restoring an AD user account from recycle bin

Generate AD snapshots

The AD recycle bin helps restore an object, but what do you do when you restore an account with incorrect settings?

To fix a user account in that situation, it helps to create AD snapshots to view previous settings and restore attributes. Use the following command from an elevated prompt:

ntdsutil snapshot 'Activate Instance NTDS' Create quit quit

The Ntdsutil command-line tool installs with AD and generates the output in this screenshot when creating the snapshot.

Active Directory snapshot
The command-line output when creating an AD snapshot

You don’t need to take snapshots on every domain controller. The number of snapshots will depend on the geographic spread of your organization and the arrangement of the administration team.

The initial snapshot captures the entire AD. Subsequent snapshots take incremental changes. The frequency of snapshots should be related to the amount of movement of the data in your AD.
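
Snapshot creation can also be scheduled rather than run by hand. One possible approach, assuming a daily 2 a.m. cadence, is a scheduled task that runs Ntdsutil as SYSTEM:

```powershell
# Create a daily scheduled task that takes an AD snapshot at 2 a.m.
$action  = New-ScheduledTaskAction -Execute 'ntdsutil.exe' `
    -Argument 'snapshot "Activate Instance NTDS" Create quit quit'
$trigger = New-ScheduledTaskTrigger -Daily -At 02:00
Register-ScheduledTask -TaskName 'AD-Snapshot' -Action $action -Trigger $trigger `
    -User 'SYSTEM' -RunLevel Highest
```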

Restore data from a snapshot

In this test scenario, let’s assume that the group memberships of a user account have been incorrectly changed. Run the following PowerShell commands to remove the user’s group memberships:

Remove-ADGroupMember -Identity finance -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false
Remove-ADGroupMember -Identity department1 -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false
Remove-ADGroupMember -Identity project1 -Members (Get-ADUser -Identity EmilyBrunel) -Confirm:$false

You need to identify the snapshot from which you will restore the data. The following command lists the snapshots:

ntdsutil snapshot 'List All' quit quit

Active Directory snapshots list
The Ntdsutil utility produces a list of the available AD snapshots.

To mount the snapshot, run the following command:

ntdsutil snapshot "mount f828eb4e-3a06-4bcb-8db6-2b07b54f9d5f" quit quit

Run the following command to open the snapshot:

dsamain -dbpath 'C:\$SNAP_201909161530_VOLUMEC$\Windows\NTDS\ntds.dit' -ldapport 51389

The Dsamain utility gets added to the system when you install AD Domain Services. Note that the console you use to mount and open the snapshot is locked.

Active Directory snapshot
Mount and open the AD snapshot.

When you view the group membership of the user account in your AD, it will be empty. The following command will not return any output:

Get-ADUser -Identity EmilyBrunel -Properties memberof | select -ExpandProperty memberof

When you view the same account from your snapshot, you can see the group memberships:

Get-ADUser -Identity EmilyBrunel -Properties memberof -Server TTSDC01.sphinx.org:51389  | select -ExpandProperty memberof
CN=Project1,OU=Groups,DC=Sphinx,DC=org
CN=Department1,OU=Groups,DC=Sphinx,DC=org
CN=Finance,OU=Groups,DC=Sphinx,DC=org

To restore the group memberships, run the following:

Get-ADUser -Identity EmilyBrunel -Properties memberof -Server TTSDC01.sphinx.org:51389  | select -ExpandProperty memberof | 
ForEach-Object {Add-ADGroupMember -Identity $_ -Members (Get-ADUser -Identity EmilyBrunel)}

This pipeline reads the group memberships from the snapshot version of the account and adds the user back into those groups in your production AD.

Your user account now has the correct group memberships:

Get-ADUser -Identity EmilyBrunel -Properties memberof | select -ExpandProperty memberof
CN=Project1,OU=Groups,DC=Sphinx,DC=org
CN=Department1,OU=Groups,DC=Sphinx,DC=org
CN=Finance,OU=Groups,DC=Sphinx,DC=org

Press Ctrl-C in the console in which you ran Dsamain, and then unmount the snapshot:

ntdsutil snapshot "unmount *" quit quit

Run an authoritative restore from a backup

In the last scenario, imagine you lost a whole OU’s worth of data, including the OU. You could do an Active Directory recovery using data from the recycle bin, but that would mean restoring the OU and any OUs it contained. You would then have to restore each individual user account. This could be a tedious and error-prone process if the data in the user accounts in the OU changes frequently. The solution is to perform an authoritative restore.

Before you can perform a restore, you need a backup. We’ll use Windows Server Backup because it is readily available. Run the following PowerShell command to install:

Install-WindowsFeature -Name Windows-Server-Backup

The following code will create a backup policy and run a system state backup:

Import-Module WindowsServerBackup
$wbp = New-WBPolicy

$volume = Get-WBVolume -VolumePath C:
Add-WBVolume -Policy $wbp -Volume $volume

Add-WBSystemState $wbp

$backupLocation = New-WBBackupTarget -VolumePath R:
Add-WBBackupTarget -Policy $wbp -Target $backupLocation

Set-WBVssBackupOptions -Policy $wbp -VssCopyBackup

Start-WBBackup -Policy $wbp

The Add-WBSystemState command adds the system state, including the AD database, to the backup policy.

The following code creates a scheduled backup of the system state at 8 a.m., noon, 4 p.m. and 8 p.m.:

Set-WBSchedule -Policy $wbp -Schedule 08:00, 12:00, 16:00, 20:00
Set-WBPolicy -Policy $wbp
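
Once the schedule is in place, it’s worth verifying that backups are landing. The Get-WBSummary and Get-WBJob cmdlets ship with the WindowsServerBackup module:

```powershell
# Check the most recent backup result and the last completed job.
Get-WBSummary | Select-Object LastSuccessfulBackupTime, LastBackupResultHR
Get-WBJob -Previous 1
```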

In this example, let’s say an OU called Test with some critical user accounts got deleted.

Reboot the domain controller on which you performed the backup, and go into Directory Services Restore Mode. If your domain controller is a VM, you may need to use Msconfig to set the boot option rather than using the F8 key to get to the boot options menu. Then run the following commands to restore the latest system state backup:

$bkup = Get-WBBackupSet | select -Last 1
Start-WBSystemStateRecovery -BackupSet $bkup -AuthoritativeSysvolRecovery

Type Y, and press Enter to restore to the original location.

At the prompt, restart the domain controller to boot back into recovery mode.

You need to mark the restored OU as authoritative by using Ntdsutil:

ntdsutil
C:\Windows\system32\ntdsutil.exe: activate instance NTDS
Active instance set to "NTDS".
C:\Windows\system32\ntdsutil.exe: authoritative restore
authoritative restore: restore subtree "ou=test,dc=sphinx,dc=org"

A series of messages will indicate the progress of the restoration, including the number of objects restored.

Exit Ntdsutil:

authoritative restore: quit
C:\Windows\system32\ntdsutil.exe: quit

Restart the domain controller. Use Msconfig before the reboot to reset to a normal start.

The OU will be restored on your domain controller and will replicate to the other domain controllers in AD.

A complete loss of AD requires intervention

In the unlikely event of losing your entire AD forest, you’ll need to work through Microsoft’s AD forest recovery guide. If you have a support agreement with Microsoft, then this would be the ideal time to use it.


Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect offering for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’ “

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.


Recovering from ransomware soars to the top of DR concerns

The rise of ransomware has had a significant effect on modern disaster recovery, shaping the way we protect data and plan a recovery. It does not bring the same physical destruction of a natural disaster, but the effects within an organization — and on its reputation — can be lasting.

It’s no wonder that recovering from ransomware has become such a priority in recent years.

It’s hard to imagine a time when ransomware wasn’t a threat, but while cyberattacks date back as far as the late 1980s, ransomware in particular has had a relatively recent rise in prominence. Ransomware is a type of malware attack that can be carried out in a number of ways, but generally the “ransom” part of the name comes from one of the ways attackers hope to profit from it. The victim’s data is locked, often behind encryption, and held for ransom until the attacker is paid. Assuming the attacker is telling the truth, the data will be decrypted and returned. Again, this assumes that the anonymous person or group that just stole your data is being honest.

“Just pay the ransom” is rarely the first piece of advice an expert will offer. Not only do you not know if payment will actually result in your computer being unlocked, but developments in backup and recovery have made recovering from ransomware without paying the attacker possible. While this method of cyberattack seems specially designed to make victims panic and pay up, doing so does not guarantee you’ll get your data back or won’t be asked for more money.

Disaster recovery has changed significantly in the 20 years TechTarget has been covering technology news, but the rapid rise of ransomware to the top of the potential disaster pyramid is one of the more remarkable changes to occur. According to a U.S. government report, by 2016, 4,000 ransomware attacks were occurring daily. This was a 300% increase over the previous year. Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon. In this brief retrospective, take a look back at the major attacks that made headlines, evolving advice and warnings regarding ransomware, and how organizations are fighting back.

In the news

The appropriately named WannaCry ransomware attack began spreading in May 2017, using an exploit leaked from the National Security Agency targeting Windows computers. WannaCry is a worm, which means that it can spread without participation from the victims, unlike phishing attacks, which require action from the recipient to spread widely.

Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon.

How big was the WannaCry attack? Affecting computers in as many as 150 countries, WannaCry is estimated to have caused hundreds of millions of dollars in damages. According to cyber risk modeling company Cyence, the total costs associated with the attack could be as high as $4 billion.

Rather than the price of the ransom itself, the biggest issue companies face is the cost of being down. Because so many organizations were infected with the WannaCry virus, news spread that those who paid the ransom were never given the decryption key, so most victims did not pay. However, many took a financial hit from the downtime the attack caused. Another major attack in 2017, NotPetya, cost Danish shipping giant A.P. Moller-Maersk hundreds of millions of dollars. And that’s just one victim.

In 2018, the city of Atlanta’s recovery from ransomware ended up costing more than $5 million, and shut down several city departments for five days. In the Matanuska-Susitna borough of Alaska in 2018, 120 of 150 servers were affected by ransomware, and the government workers resorted to using typewriters to stay operational. Whether it is on a global or local scale, the consequences of ransomware are clear.

Ransomware attacks
Ransomware attacks had a meteoric rise in 2016.

Taking center stage

Looking back, the massive increase in ransomware attacks between 2015 and 2016 signaled when ransomware really began to take its place at the head of the data threat pack. Experts not only began emphasizing the importance of backup and data protection against attacks, but also began planning for future potential recoveries. Depending on your DR strategy, recovering from ransomware could fit into your current plan, or you might have to start considering an overhaul.

By 2017, the ransomware threat was impossible to ignore. According to a 2018 Verizon Data Breach Report, 39% of malware attacks carried out in 2017 were ransomware, and ransomware had soared from being the fifth most common type of malware to number one.

Verizon malware report
According to the 2018 Verizon Data Breach Investigations Report, ransomware was the most prevalent type of malware attack in 2017.

Ransomware was not only becoming more prominent, but more sophisticated as well. Best practices for DR highlighted preparation for ransomware, and an emphasis on IT resiliency entered backup and recovery discussions. Protecting against ransomware became less about wondering what would happen if your organization was attacked, and more about what you would do when your organization was attacked. Ransomware recovery planning wasn’t just a good idea, it was a priority.

As a result of the recent epidemic, more organizations appear to be considering disaster recovery planning in general. As unthinkable as it may seem, many organizations have been reluctant to invest in disaster recovery, viewing it as something they might need eventually. This mindset is dangerous, and results in many companies not having a recovery plan in place until it’s too late.

Bouncing back

While ransomware attacks may feel like an inevitability — which is how companies should prepare — that doesn’t mean the end is nigh. Recovering from ransomware is possible, and with the right amount of preparation and help, it can be done.

The modern backup market is evolving in such a way that downtime is considered practically unacceptable, which bodes well for ransomware recovery. Having frequent backups available is a major element of recovering, and taking advantage of vendor offerings can give you a boost when it comes to frequent, secure backups.

Vendors such as Reduxio, Nasuni and Carbonite have developed tools aimed at ransomware recovery, and can have you back up and running without significant data loss within hours. Whether the trick is backdating, snapshots, cloud-based backup and recovery, or server-level restores, numerous tools out there can help with recovery efforts. Other vendors working in this space include Acronis, Asigra, Barracuda, Commvault, Datto, Infrascale, Quorum, Unitrends and Zerto.

Along with a wider array of tech options, more information about ransomware is available than in the past. This is particularly helpful with ransomware attacks, because the attacks in part rely on the victims unwittingly participating. Whether you’re looking for tips on protecting against attacks or recovering after the fact, a wealth of information is available.

The widespread nature of ransomware is alarming, but also provides first-hand accounts of what happened and what was done to recover after the attack. You may not know when ransomware is going to strike, but recovery is no longer a mystery.


A data replication strategy for all your disaster recovery needs

Meeting an organization’s disaster recovery challenges requires addressing problems from several angles based on specific recovery point and recovery time objectives. Today’s tight RTO and RPO expectations mean almost no lost data and almost no downtime.

To meet those expectations, businesses must move beyond backup and consider a data replication strategy. Modern replication products offer more than just a rapid disaster recovery copy of data, though. They can help with cloud migration, using the cloud as a DR site and even solving copy data challenges.

Replication software comes in two forms. One is integrated into a storage system, and the other is bought separately. Both have their strengths and weaknesses.

An integrated data replication strategy

The integrated form of replication has a few advantages. It’s often bundled at no charge or is relatively inexpensive. Of course, nothing in life is really free. The customer pays extra for the storage hardware in order to get the “free” software. In addition, at-scale, storage-based replication is relatively easy to manage. Most storage system replication works at a volume level, so one job replicates the entire volume, even if there are a thousand virtual machines on it. And finally, storage system-based replication is often backup-controlled, meaning the replication job can be integrated and managed by backup software.

There are, however, problems with a storage system-based data replication strategy. First, it’s specific to that storage system. Consequently, since most data centers use multiple storage systems from different vendors, they must also manage multiple replication products. Second, the advantage of replicating entire volumes can be a disadvantage, because some data centers may not want to replicate every application on a volume. Third, most storage system replication inadequately supports the cloud.

Stand-alone replication

IT typically installs stand-alone replication software on each host it’s protecting or implements it into the cluster in a hypervisor environment. Flexibility is among software-based replication’s advantages. The same software can replicate from any hardware platform to any other hardware platform, letting IT mix and match source and target storage devices. The second advantage is that software-based replication can be more granular about what’s replicated and how frequently replication occurs. And the third advantage is that most software-based replication offers excellent cloud support.

While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well.

At a minimum, the cloud is used as a DR target for data, but it can also serve as an entire disaster recovery site, not just a copy. This means it can instantiate virtual machines, using cloud compute in addition to cloud storage. Some approaches go further with cloud support, allowing replication across multiple clouds or from the cloud back to the original data center.

The primary downside of a stand-alone data replication strategy is that it must be purchased separately, because it isn’t bundled with storage hardware. Its granularity also means dozens, if not hundreds, of jobs must be managed, although several stand-alone data replication products have added the ability to group jobs by type. Finally, there isn’t wide support from backup software vendors for these products, so any integration is a manual process, requiring custom scripts.

Modern replication features

Modern replication software should support the cloud and support it well. This requirement draws a line of suspicion around storage systems with built-in replication, because cloud support is generally so weak. Replication software should have the ability to replicate data to any cloud and use that cloud to keep a DR copy of that data. It should also let IT start up application instances in the cloud, potentially completely replacing an organization’s DR site. Last, the software should support multi-cloud replication to ensure both on-premises and cloud-based applications are protected.

Another feature to look for in modern replication is integration into data protection software. This capability can take two forms: The software can manage the replication process on the storage system, or the data protection software could provide replication. Several leading data protection products can manage snapshots and replication functions on other vendors’ storage systems. Doing so eliminates some of the concern around running several different storage system replication products.

Data protection software that integrates replication can either be traditional backup software with an added replication function or traditional replication software with a file history capability, potentially eliminating the need for backup software. It’s important for IT to make sure the capabilities of any combined product meets all backup and replication needs.

How to make the replication decision

The increased expectation of rapid recovery with almost no data loss is something everyone in IT will have to address. While backup software has improved significantly, tight RPOs and RTOs mean most organizations will need replication as well. The pros and cons of both an integrated and stand-alone data replication strategy hinge on the environment in which they’re deployed.

Each IT shop must decide which type of replication best meets its current needs. At the same time, IT planners must figure out how that new data replication product will integrate with existing storage hardware and future initiatives like the cloud.

Frost Science Museum IT DR planning braced for worst, survived Irma

When you open a large public facility right on the water in Miami, a good disaster recovery setup is an essential task for an IT team. Hurricane Irma’s assault on Florida in September 2017 made that clear to the Phillip and Patricia Frost Museum of Science team.

The expected Category 5 hurricane moving in on Florida had the new Frost Science Museum square in its sights. Irma turned out to be less threatening to Miami than feared, and the then-4-month-old building suffered no major damage. Still, the museum’s vice president of technology said he felt prepared for the worst with his IT DR planning.

When preparing to open the museum on a 250,000-square-foot location on the Miami waterfront, technology chief Brooks Weisblat installed a new Dell EMC SAN in a fully redundant data center and set up a colocation site in Atlanta as part of its disaster recovery plan. The downgraded Category 4 hurricane dumped water into the building, but did no serious damage and caused no downtime.

Frost Science Museum's Brooks Weisblat

The new Frost Science Museum building features three diesel generators and redundant power, including 20 minutes of backup power in the battery room that should provide enough juice until the backup generators come online. While much of southern Florida lost power during Irma, the museum did not.

“We’re sitting right on the water. It was supposed to be a major hurricane coming straight through Miami. But six hours before hitting, it veered off, so it wasn’t a direct hit,” Weisblat said. “We have two weather stations on the building, and we recorded force winds of 90 to 95 miles per hour. It could have been 190 mile-per-hour winds, and that would have been a different story.”

Advance warning of the hurricane prompted the museum’s team to bolster its IT DR planning.

“The hurricane moved us to get all of our backups in order,” Weisblat said. “Opening the building was intensive. We had backups internally, but we didn’t have off-site backups yet. It pushed us to get a colocated data center in Atlanta when the hurricane warnings came about a week before. At least we had a lot of advance notice for this one. Except for some water here and there, the museum did well.”

The Frost Science Museum raised $330 million in funding to build the new center in downtown Miami, closing its Coconut Grove site in August 2015. Museum organizers said they hoped to attract 750,000 visitors in the first year at the new site. From its May opening through Oct. 31, more than 525,000 people visited the museum.

Shifting to SAN, all-flash

When moving, Frost Science installed a dual-controller Dell EMC SC9000 — formerly Compellent — all-flash array, with 112 TB of capacity connected to 10 Dell EMC PowerEdge servers virtualized with VMware. As part of its IT DR planning, the museum uses Veeam Software to back up virtual machines to a Dell PowerEdge R530 server, with 40 TB of hard disk drive storage on site, and it replicates those backups to another PowerEdge server in the Atlanta location.

“If something happens at this site, we’re able to launch a limited number of VMs to power finance, ticketing and reporting,” Weisblat said. “We can control those servers out of Atlanta if we’re unable to get into the building.”

Before opening the new building, Weisblat’s team migrated all VMs between the old and new sites. The process took three weeks. “We had to take down services, copy them to drives a few miles away, then bring those into the new environment and do an import into a new VM cluster,” he said.

The data center sits on the third floor of the new building, 60 feet above sea level. It takes up 16 full cabinets, plus eight racks for networking, Weisblat said.

Frost Science Museum had no SAN in the old building. Its IT ran on 23 servers. Weisblat said he migrated the stand-alone servers into the VMware cluster on the Compellent array before moving. “That way, when the new system came online, it would be easy to move those servers over as files, and we would not have to do migrations into VMware in the new building during the crush time for our opening,” he said.

The Dell EMC SAN runs all critical applications, including the customer relationship management system, exhibit content management, property management system software, the museum website, online ticketing and building security management systems. The security system controls electricity, lights, solar power, centralized antivirus deployments and network access control. “Everything is powered off this one system,” Weisblat said.

The SAN has two Brocade — now Broadcom — Fibre Channel switches for redundancy. “We can unplug hosts; everything keeps running,” Weisblat said. “We can unplug one of the storage arrays, and everything keeps running. The top-of-rack 10-gig [Extreme Avaya Ethernet] switches are also fully redundant. We can lose one of those.”

He said since installing the new array, one solid-state drive went out. “The SSD sent us an alert, and Dell had parts to us in two hours. Before I knew something was wrong, they contacted me.”

Whether it’s a failed SSD or an impending hurricane, early alerts and IT DR planning certainly help when dealing with disasters.

Buffaloes and the Cloud: Students turn to tech to save poor farming families – Asia News Center

Say the word “disaster” and what comes to mind? An earthquake, a drought, a flood, a tsunami, a hurricane? These are big and brutish events. They grab headlines, inspire people to donate, and trigger international relief efforts.

But what about the many micro-disasters that can, at any time, befall poor families across the developing world? For those who live on a perpetual economic knife edge, even a small misfortune or an unexpected turn of events can devastate their hopes and dreams.

Let’s turn to Thimi, a tiny village in the ancient valley of Bhaktapur in Nepal, a nation that sits in the shadow of the Himalayas and is among the world’s poorest. The overwhelming majority of its 30 million people rely on farming to subsist, often on fragmented, hilly and marginal land where weather and other conditions are subject to extremes. In this rural society, a family typically measures its wealth in the number of animals it keeps.

For years, Rajesh Ghimire and his wife, Sharadha, worked hard to build up a modest herd of 45 cows, goats, and buffaloes. The farm was generating enough income to raise their two children, support four other relatives, and even pay six workers to help out. The Ghimires had their eyes fixed on better times ahead, and were saving to send their daughter, Ekta, to medical school.

Then, their own micro-disaster struck. A series of heatwaves triggered an outbreak of anthrax. Almost half of their animals were wiped out and, with them, most of the family’s dreams. The money that had been put away for Ekta’s studies had to be used to save the farm. Seven years later, the family is still trying to claw back what it lost.

Humanitarian company uses Dynamics 365 for Talent for quicker deployment

Whether it’s responding to a natural disaster or helping a developing country improve its education system or water quality, international development company Chemonics needs to build out specialized business processes on the fly. That’s how it keeps more than 60 humanitarian projects around the world moving, despite each one having its own technological needs that are dependent on size, scope and location.

Roughly three years ago, the Washington, D.C.-based company began looking at business applications that could simplify the HR process of finding and hiring the talent needed for its distinct projects, ultimately settling on Microsoft’s Dynamics 365 last October. But Chemonics still wanted more HR capabilities, such as onboarding and contract management, and it was evaluating third-party tools to fill the gaps when Microsoft told the company about a new feature coming down the line: Dynamics 365 for Talent.

The new Dynamics 365 feature, which was made generally available on Aug. 1, helps streamline routine tasks and automates staffing processes.

“Essentially, we build a brand-new company of anywhere from 15 to 20 people, to 400 to 500 people,” said Eric Reading, executive vice president at Chemonics. “Our business process and the way we organize ourselves needs to be very flexible and oriented around the rapidly changing nature of the geographic and organizational layouts of our company.”

‘We can work in real time’

Founded in 1975, Chemonics has done humanitarian work all over the globe, including current projects in Afghanistan helping with sustainable agriculture and literacy, policy reform in Jordan and health services in Angola, as well as dozens more. The process calls for a local office to be set up in the corresponding region, with recruiting and hiring of talent both worldwide and local to that region.

“We have roughly 4,500 staff around the world, with the smallest office being a half dozen staff and the largest around 400 people,” Reading said. “It’s a pretty dramatic range of scale we have to work in. A lot of those systems and processes we used were designed during a time when we used telex machines. Things were manual or with little automation due to the geographic separation.”

The growth of cloud hosting allowed Chemonics to modernize its technology, even though internet infrastructure can be spotty in some of the developing nations in which it works.

“It took us to a place where it was possible to have our whole global organization operating on a single framework for IT and business process,” Reading said. “We can work in real time and collaborate.”

Chemonics researched roughly a dozen different software providers, ultimately narrowing the list to four, then to two — Oracle and Dynamics 365 — before settling on Dynamics for its UI consistency, simplicity and licensing structure.

“The consistency of experience across different parts of the interface was valuable,” Reading said. “There are a lot of elements of business that have to be done a certain way because we’re a government contractor and work on programs that need to comply in a lot of different legal departments. It allowed us to do more at a deeper level without having to completely customize everything.”

And while Chemonics’ first iteration of Dynamics helped with collaboration and consistency across its global projects, it still left something to be desired in the HR department.

“At the time, there was an incompleteness of the HR offering, and it didn’t satisfy our needs in that area,” Reading said. “We were evaluating options on what do we append in to get that resource functionality. We talked with Microsoft about it, and they asked us to give them a little bit of time to see what was coming down the road.”

Reading said Chemonics was one of the first Microsoft customers to set up Dynamics 365 for Talent for a project in the Dominican Republic.

“After [implementing Dynamics 365 for Talent], we stood up the Dominican Republic office in a 21-day period,” Reading said, adding that the typical goal is 60 days.

A screenshot showing the different steps of the onboarding process at Chemonics through Dynamics 365 for Talent. The new feature was generally released on Aug. 1.

Integrating with LinkedIn

Dynamics 365 for Talent was one of two major upgrades Microsoft brought to its business applications earlier this year; the other integrated LinkedIn Sales Navigator with Dynamics 365 for Sales, which allows Dynamics customers to mine LinkedIn’s 500 million members for additional sales leads.

Integrating LinkedIn’s vast amount of professional data into Dynamics also helps with the hiring process that Chemonics needed.

“The new offerings focus on the hiring process, the employee onboarding process and the underlying core needs of HR,” said Mike Ehrenberg, chief strategist for Microsoft. “We’ve had these abilities before, but it’s much more modern and richer now.”

Reading said Chemonics uses LinkedIn as one of the first places to find specialized and specific talent.

“We may need to find an expert in methodology of literacy that can work in a particular language,” Reading said. “Finding that specialized skill set and being able to link it from LinkedIn to the Talent offering is exciting.”

Prior to Dynamics 365 for Talent, the hiring process for Chemonics’ different projects was manual — and the results varied.

“We often had lots of one-page Word documents that may or may not get reused,” Reading said. “We’d have checklists and other manual management work that had a fair level of inconsistency with it.”

Licensing easy to work with

The final aspect that drew Chemonics toward Dynamics 365 was the flexible licensing Microsoft offered, with both an overarching license for management and administrators and a team member license for employees with simpler needs.

“Our organization doesn’t break down neatly among traditional roles,” Reading said. “The licensing made it easier to manage the process and was much more competitive from a pricing standpoint.”

Full use of Dynamics 365 costs $210 per user, per month, with team member licenses costing $8 per user, per month for basic processes and shared knowledge. There’s also an operations activity license for $50 per user, per month and an operations devices license for $75 per user, per month. Microsoft also offers cheaper, stripped-down licenses of Dynamics 365, some of which don’t include Dynamics 365 for Talent.