
Data protection news 2017: Security issues make headlines

Backup and data security became intertwined in 2017.

WannaCry ransomware and Amazon Simple Storage Service (S3) bucket leaks dominated data protection news, forcing users and vendors to find new ways to protect data. Other data protection news showed shifts in technology and corporate strategy, such as two old-school backup vendors rolling out converged appliances, a billion-dollar-plus private equity transaction and a maturing vendor’s decision to split its CEO job in two.

WannaCry shines a light on ransomware, data recovery

The WannaCry attack that hit more than 100,000 organizations in 150 countries in May brought ransomware into the public consciousness, and it also highlighted the need for proper data protection. As a result, backup vendors now routinely include features designed to help combat ransomware attacks.

That hasn’t stopped the attacks, though. Experts noted that ransomware attacks have become stealthier, and protection against ransomware is now more complicated. That means recovering data from such attacks is getting trickier.

News about WannaCry continued right up until the end of the year, when the White House in December officially blamed the North Korean government for the attacks.

See: WannaCry proves the importance of backups

U.S. blames North Korea for WannaCry

Cybersecurity experts expose leaky Amazon S3 buckets

Reports surfaced that corporations, small companies and government agencies had left terabytes of corporate and top-secret data exposed on the internet via misconfigured Amazon S3 storage buckets. Experts said the data was left vulnerable to hacking because access control lists were configured for public access, so any user with an Amazon account could get to the data simply by guessing the name of the bucket.

The list of firms affected by the data protection news included telecommunications giant Verizon, Dow Jones, consulting firm Accenture, World Wrestling Entertainment and U.S. government contractor Booz Allen Hamilton. Many in the IT industry blame end users for failing to set the proper restricted access level on the buckets, but the publicity still prompted Amazon to build in new features to mitigate the cloud storage security problem.

Amazon added new S3 default encryption that mandates all objects in the bucket must be stored in an encrypted form. The vendor also added permission checks that display a prominent indicator next to each Amazon S3 bucket that is publicly accessible.
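For admins who don’t want to wait on the console, the default encryption feature is also exposed through the S3 API. A minimal sketch using the AWS CLI, with my-bucket as a placeholder bucket name:

# Require AES-256 server-side encryption for all new objects in the bucket
aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Review the bucket ACL for grants to AllUsers, which indicate public access
aws s3api get-bucket-acl --bucket my-bucket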

Still, reports of more sensitive data left exposed in unsecured storage buckets continued. In November, cybersecurity firm UpGuard reported it was able to access data in storage buckets belonging to the United States Army Intelligence and Security Command and the U.S. Central Command and Pacific Command.

See: Poorly configured Amazon S3 buckets exposed data

Don’t blame Amazon for S3 issues

Dell EMC, Commvault converge backup

Relative backup newcomers Cohesity and Rubrik left a big mark on data protection news in 2017, as stalwarts Dell EMC and Commvault moved down the converged backup path the upstarts pioneered.

The Dell EMC Integrated Data Protection Appliance (IDPA) launched at Dell EMC World in May. The purpose-built, preintegrated system converges storage, software, search and analytics in one appliance, providing data protection across applications and platforms with a native, cloud-tiering capability for long-term retention. IDPA includes Data Domain data deduplication technology.

Commvault answered with its HyperScale appliance that puts the vendor’s HyperScale software on a scale-out storage system. The branded Commvault appliance marks a new direction for the vendor, which previously only sold software. Commvault has also partnered with Cisco, which rebrands HyperScale as ScaleProtect on the Cisco Unified Computing System. 

See: Dell EMC integrates backup technologies

Commvault hypes HyperScale

Barracuda becomes a private affair

In a deal that best represents data protection acquisitions in 2017, private equity giant Thoma Bravo spent $1.6 billion to acquire publicly held Barracuda Networks and take it private. Barracuda is best known for its security products, but it has steadily expanded its backup and disaster recovery platforms in recent years.

The Bravo-Barracuda data protection news highlighted a 2017 trend in the field’s acquisitions. Datto and Spanning also went the private-equity route during the year. Vista Equity Partners acquired Datto and merged it with Autotask, and Dell EMC sold off cloud-to-cloud backup pioneer Spanning to Insight Venture Partners.

See: Bravo takes Barracuda Networks private

Veeam tag-teams CEO role

Veeam Software has grown up so much it now takes two chief executives to run the company. Veeam split its CEO job in 2017, naming Peter McKay and founder Andrei Baronov co-CEOs. Baronov started Veeam in 2006 along with Ratmir Timashev, who served as CEO until 2016 and remains on its board. McKay came to Veeam in 2016 as COO and president.

The division of power calls for McKay to head Veeam’s “go-to-market,” finance and human resources functions, while Baronov handles research and development, market strategy and product management. William Largent, who held the CEO job for 11 months, is now chairman of Veeam’s finance and compensation committees.

See: Veeam shifts management, product strategy

Wanted – 2TB external HDD USB 3 or thunderbolt

Looking for a 2TB external HDD to use as a simple backup drive for my Mac.

I can get one for £65 new off Amazon, but wanted to see if anyone has one going spare on here first.

Cash via BT waiting

Thanks

Location: Higher Walton, just outside of Preston



Windows file server migration tool eases data transfer dread

Files live everywhere, from desktops and servers to websites. They’re just objects stored on a file system. So why is it rarely simple to transfer a bunch of them?

A Windows file server migration should be straightforward. Windows admins have the xcopy command, robocopy and the Copy-Item PowerShell cmdlet at their disposal, each with a source, a destination and even a recursion option to pick up every item in all subfolders. But unforeseen issues always seem to foul up large file migrations.
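For example, a recursive copy that also carries over NTFS security might look like the following sketch, with hypothetical server names:

# /E copies all subfolders; /COPY:DATS copies data, attributes, timestamps and security (ACLs)
robocopy \\FILESRV\Users \\NEWSRV\Users /E /COPY:DATS /R:2 /W:5

A plain xcopy or Copy-Item run copies the files but not the NTFS ACLs, which is exactly the kind of detail that fouls up migrations.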

IT professionals typically overlook two topics before they perform a large Windows file server migration: Microsoft’s New Technology File System (NTFS) and share permissions, and open file handles. A typical scenario illustrates these concepts.

Say you’ve got a 500 GB file server with each employee’s home folder stored on the \\FILESRV\Users file share. The IT department plans to map the folder as a network drive, via Group Policy Objects, on every user’s desktop. But when it’s time to move those home folders, things go wrong. It could be that the disk that stores the home folders is direct-attached. In that case, the admins must migrate it to a storage area network or transfer the data to a different logical unit number. All of that important data must move.

In this scenario, the data isn’t just cold storage; it changes every day. It also has a specific permission structure: Employees have full rights to their folders, managers have access to their employees’ folders and other miscellaneous NTFS permissions are scattered about. The organization depends on 24/7 availability for this data.
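Before touching anything, it’s worth capturing the existing ACLs so there’s a baseline to compare against after the move. A quick check, using a hypothetical home folder:

# List the owner and access rules for one user's home folder
Get-Acl \\FILESRV\Users\jsmith | Format-List Owner, AccessToString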

Commercial tools are available to aid in a large Windows file server migration, including Quest’s Secure Copy and Swimage. Microsoft offers the free Windows Server Migration Tools feature, which copies files and recreates shares, and it’s a great alternative to fiddling with switches in robocopy.

Use Windows Server Migration Tools for file transfers

Windows Server Migration Tools is a Windows feature, so the user installs it via PowerShell on the destination server:

Install-WindowsFeature Migration -ComputerName DESTINATIONSRV

Once the feature installs, stay on the destination server, and use the SmigDeploy utility to create the deployment shares. The SmigDeploy tool makes the share on the destination server and performs the required setup on the source server. The syntax below assumes that the source server runs Windows Server 2012 and has an AMD64 architecture, and that the deployment package path is E:\Users.


smigdeploy.exe /package /architecture amd64 /os WS12 /path E:\Users

Use a similar command if the source server runs an earlier version of Windows Server.

Once this command generates the E:\Users folder, create a share for it:

New-SmbShare -Path E:\Users -Name Users

Next, copy the deployment folder from the destination server to the source server:

Copy-Item -Path \\DESTINATIONSRV\Users -Destination \\SOURCESRV\c$ -Recurse

Register the migration tools on the source server to continue. From the source server, change to the C:\Users\SMT_ws12_amd64 folder, and run SmigDeploy.exe to make the tools ready for use.
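On the source server, those two steps look like this, assuming the WS12/amd64 package copied over earlier:

# Move into the copied deployment folder and register the migration cmdlets
Set-Location C:\Users\SMT_ws12_amd64
.\SmigDeploy.exe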

To perform the Windows file server migration, go to the destination server, and import the PowerShell snap-in that the feature installed:

Add-PSSnapin Microsoft.Windows.ServerManager.Migration

Once the snap-in loads, run Receive-SmigServerData. This sets up the destination server to receive data from the source server once the transfer is initiated. Then go to the source server, and send all of the data to the destination:

Send-SmigServerData -ComputerName DESTINATIONSRV -SourcePath D:\Users -DestinationPath C:\Users -Include All -Recurse

Enter the administrator password if prompted, then watch as the files and folders flow over to the destination server. The process copies the data and keeps permissions in place during the Windows file server migration.
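To spot-check that permissions survived the trip, compare a sample folder’s ACL on each side. A sketch with a hypothetical user folder; run the second command on the destination server, where the data landed under C:\Users:

# On the source server
Get-Acl D:\Users\jsmith | Format-List Owner, AccessToString

# On the destination server
Get-Acl C:\Users\jsmith | Format-List Owner, AccessToString

If the two listings match, the permission structure made the move along with the data.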

Ransomware recovery goes beyond data loss for enterprises

Law enforcement has encouraged enterprises to not pay ransom, but experts said the decision isn’t so simple when faced with business downtime during the ransomware recovery process.

The impact of ransomware attacks like WannaCry and NotPetya is still being felt by many organizations, and the lasting damage from attempting an in-house ransomware recovery can often be harsh.

Weeks after the NotPetya attacks, FedEx admitted its TNT unit was still relying on manual processes for operations because its ransomware recovery process wasn’t finished. More recently, in its Q2 2017 earnings report, Merck described the financial impact of a cyberattack that occurred on June 27, the day NotPetya began its spread, although the company did not specifically say what kind of attack it was.

The earnings report, released on Aug. 28, 2017, said the company was still “in the process of restoring its manufacturing operations.” And while Merck said in its report it did “not yet know the magnitude of the impact of the disruption,” it did alter its financial outlook to reflect “the current state of the company’s manufacturing operations as well as its plans to restore those operations and potential costs associated with the remediation efforts.”

Chris Roosenraad, director of product management at Neustar Security Solutions, said the cost of the service disruption will “almost always be more than the ransom demand, if you’re being honest about the costs.” 

“The time of all the IT staff, of the investigators (internal or external), of the PR team and lawyers to prepare a response in case it gets public, etc.,” Roosenraad told SearchSecurity via email. “And that is all regardless of if you pay or not; you still have to spend those costs.”

Willis McDonald, senior threat researcher at Core Security, said he understood why an organization may choose to pay ransom.

“From a business perspective it can make sense to pay the ransom and be done with the issue even if you have solid backups.  The cost in man hours it takes to coordinate and transfer data from backups can easily surpass the cost of paying the ransom and distributing the decryption key or binary throughout a large organization,” McDonald told SearchSecurity. “The driving force in paying the ransom or not for most businesses really comes down to the cost in wages to recreate or restore operations and data. This is assuming that the attackers can prove that restoring the ransomed data is possible.”

Rick Holland, vice president of strategy at Digital Shadows, said ransomware recovery can be difficult even if an enterprise has an effective disaster recovery program and data backups.

“Backups are a snapshot in time, so there is the potential for data or transactions to be lost between the last backup and the time of ransomware encryption. If a revenue generating application is offline for more than a few hours, the revenue losses could be significantly higher than a ransomware payout expense,” Holland told SearchSecurity. “The release of intellectual property associated with TV shows and box office films could greatly reduce ad revenue and box office revenue. Stolen data that contains [personal health information] or [personally identifiable information] could result in fines from government agencies and class action lawsuits from those impacted by the release.”


Jason Kichen, director of cybersecurity services at Versive, said the traditional ransomware recovery process of restoring from data backups is becoming less useful.

“The latest ransomware attacks often target network connected computers, and this often includes servers and systems that serve as backup for critical business data. Off-line backups are key to ensuring business continuity, but this sort of setup is often costlier and has a higher amount of overhead,” Kichen told SearchSecurity. “The level of effort to restore from backups can be significant, and it will often be less expensive in the long run to pay the ransom and re-enable business operations as opposed to not paying the ransom and restoring systems from backup.”

Problems with paying ransom

However, despite the cost equation favoring paying the ransom, experts said this was not as straightforward a ransomware recovery plan as it may appear.

“There should be a calculation as to how likely you are to get a decrypt key if you do pay, and the PR associated with your end decision. For instance, if you do pay, and it becomes known, you may take a PR hit, and you may increase the chances you get targeted again in the future [because] you’re now known to pay ransom,” Roosenraad said. “Or you may not get hit again for a while, because you’ve paid your protection money. That depends on the attackers, and may or may not be something you can figure out before you pay the ransom.”

Weston Henry, lead security analyst at SiteLock, said paying ransom is no guarantee of data retrieval and businesses would do better to have a long-term ransomware recovery plan.

“The short-term cost of remediation and lost revenue may outweigh paying a ransom, but the long-term benefits are a secured network and reliable data restoration,” Henry told SearchSecurity. “There is no guarantee that a business will get its data back if a ransom is paid.”

Willy Leichter, vice president of marketing at Virsec, said paying a ransom is never the solution.  

“Even if you pay a ransom, you have no guarantees that your data will be returned, and that the infiltration isn’t still active in your networks. In fact, you’re tagging yourself as a willing target who will inevitably be hit again,” Leichter told SearchSecurity. “A robust system of backups is by far the best defense against a ransom, but it doesn’t insulate you from potential lawsuits or compliance violations if data is lost. If your networks have been compromised, you have risk.”

Holland said paying ransom could also invalidate insurance policies.

“In a climate where insurance underwriters are adding more rigor to their cyber policies and looking for opportunities to not pay out on a policy, capitulating to a ransom demand could have significant implications,” Holland said. “Additionally, if the word comes out that a business has given in and paid out a ransomware attempt, then it is likely that more attempts will be made in the future.” 
