
JetStream DR on Cloudian offers MSPs lower-cost DRaaS

A California-based managed service provider has deployed a new disaster-recovery-as-a-service offering that integrates JetStream Software’s DR product with Cloudian’s object storage technology.

Enterprise Networking Solutions Inc. (ENS-Inc), an MSP based in Rancho Cordova, Calif., said it is the first service provider to use the DRaaS product JetStream DR on Cloudian. The DRaaS offering, which targets customers using VMware vSphere, continuously replicates data from customers’ VMs to Cloudian’s object store, housed in an off-site recovery environment.

ENS-Inc has used JetStream DR as an MSP partner for about two years, having beta-tested the product in 2018. The MSP has also used Cloudian’s HyperStore platform as an archival storage tier for three years.

Paul Smitham, president at ENS-Inc, said combining the technologies for DRaaS can reduce costs by 60%. MSPs can pass the cost savings along to customers. 

“DR sometimes is very expensive and, a lot of times, people don’t think they need it because it is so expensive,” Smitham said.

ENS-Inc has provided JetStream DR on Cloudian for about six months, originally on a beta basis. The offering is now in general use. “We have customers on it, and we are starting to get more and more,” Smitham said.


Additional cost savings stem from the use of JetStream DR, which Smitham said costs much less than competitive products from vendors such as Veeam and Zerto. He noted those products don’t extend to object storage yet.

At the moment, ENS-Inc uses JetStream DR on Cloudian to replicate customers’ data to DR sites in Rancho Cordova and Las Vegas.

Rich Petersen, president and co-founder at JetStream, said feedback from partners such as ENS-Inc has helped prioritize the development of new features. “We have been getting a lot of great guidance,” he said.

Jon Toor, CMO at Cloudian, added that service providers have represented a major slice of Cloudian’s target market since the company’s launch in 2011. He suggested object storage has found a home in the data protection world.

“Object storage has really emerged as the de facto target for backup software and continuous replication software,” he said. “[DRaaS] is the kind of use case Cloudian was designed for.”


Istio service mesh revamp may ease use, or sow confusion

A new version of the Istio service mesh rolled out this week introduces significant changes to the project’s architecture but leaves key questions about the project’s future direction unanswered.

Istio service mesh version 1.5 introduces Istiod, a monolithic package that combines what had been four separate control plane microservices into one utility. These include a sidecar injector service; the Pilot service, which handled sidecar proxy configuration; Citadel, which provided security functions, including a certificate authority; and Galley, which performed validation.

The functions of a fifth control plane component in previous versions, the Mixer telemetry collection service, will shift to a new set of plugins for the Envoy sidecar with version 1.5.

Istio, an open source project founded by Google, IBM and Lyft, has developed a popular approach to service mesh, a network architecture that collects monitoring data and enforces fine-grained policies in complex microservices environments. It boasts powerful backers that now also include Red Hat, but has yet to achieve the same dominance over the cloud-native software market as Kubernetes container orchestration. In fact, Istio rival Linkerd has been profiting from its competitor’s reputation for cumbersome management for at least a year, and steadily closing Istio’s early lead with support for features such as mutual TLS (mTLS).

Team leads at IBM want to see Istio enjoy the same ubiquity as Kubernetes, and this desire informed the significant changes in Istio 1.5. Removing the Mixer service, which had been associated with performance bottlenecks in earlier versions, is also intended to improve Istio’s control plane performance and boost its appeal to a wider audience.

“We want Istio to be like Kubernetes — ‘boring’ infrastructure for microservices,” said Lin Sun, IBM’s technical lead for Istio. “That’s our high-priority goal for 2020: To be able to move services to service mesh faster without [requiring] as many configuration changes to microservices.”

In the short term, the change is likely to cause tumult in the industry, which is still in the early stages of service mesh adoption, analysts said.


“There is irony in [adding] a monolith for a service mesh whose purpose is to discover, connect and do traffic management for microservices,” said Brad Casemore, analyst at IDC. “It may cause some folks to say, ‘If they were off on their assumptions about a microservices-based architecture for the service mesh, can I be confident they’ll manage to get it right this time?'”


It will take until at least version 1.6 for Istiod to reach feature parity with the previous microservices architecture, particularly in multi-cluster environments. The newly unified Istiod daemon doesn’t yet support Citadel’s certificate authority or the sidecar injection service. Users at KubeCon who intended to deploy Tiller-less Helm v3 with newer versions of Istio will also have to wait for future releases, as the finer details of Helm v3 support under Istiod have yet to be finalized.

IBM’s Sun said she doesn’t expect many technical hurdles to implementing these features by the next release, but Istio’s core audience is accustomed to the microservices architecture. These early adopters may chafe at the sweeping changes to the platform, particularly if they must wait too long for the new architecture to match the capabilities of earlier versions, Casemore warned.

“Simplified management will appeal to shops whose platform teams are not as adept [as early adopters], but I wonder if it sends a mixed message,” he said.

Another potential challenge for the next few versions of Istio service mesh lies in the transition to the new Envoy-based mechanism for integrating third-party extensions to the project. Documentation for the Mixer adapter conversion process to Envoy plugins is still being developed, Sun said. The success of this transition will dictate whether third-party makers of network infrastructure products such as application delivery and ingress controllers continue to support Istio service mesh or switch their loyalties to a competing alternative, another possible blow to Istio’s market momentum.  

The project also faces broader, longer-term questions about its governance — namely, whether it will be donated to a foundation such as the CNCF by Google, as Kubernetes was.

Istio contributors including IBM’s Sun have put forth a governance proposal to Google that would rework the project’s charter and widen the steering committee to include vendors and users beyond today’s members from Google, IBM and Red Hat. Sun declined to share further specific details about the charter changes. Donating Istio to a foundation remains off the table for now, she said.

“It’s something we don’t like, but we spent a lot of time within IBM on the new steering charter, and most of our proposals were accepted,” she added.


What Exactly is Azure Dedicated Host?

In this blog post, we’ll become more familiar with a new Azure service called Azure Dedicated Host. Microsoft announced the service as a preview some time ago and will make it generally available in the near future.

Microsoft Azure Dedicated Host allows customers to run their virtual machines on a dedicated host that is not shared with other customers. While in a regular virtual machine scenario different customers or tenants share the same hosts, with Dedicated Host, a customer no longer shares the hardware. The picture below illustrates the setup.

Azure Dedicated Hosts

With a Dedicated Host, Microsoft wants to address customer concerns regarding compliance, security and regulations that can come up when running on a shared physical server. In the past, there was only one way to get a dedicated host in Azure: use a very large instance, such as the D64s v3 VM size. These instances were so large that they consumed an entire host, so no other customers’ VMs could be placed on it.

To be honest, with the improvements in machine placement, larger hosts and, with that, much better density, there was no longer a 100% guarantee that such a host stayed dedicated. These instances are also extremely expensive, as you can see in the screenshot from the Azure Price Calculator.

Azure price calculator

How to Set Up a Dedicated Host in Azure

The setup of a dedicated host is pretty easy. First, you need to create a host group with your availability preferences, such as Availability Zones and the number of fault domains. You also need to decide on a Host Region, Group Name, etc.

How To Setup A Dedicated Host In Azure
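
If you prefer to script this step rather than click through the portal shown above, a minimal sketch with the Azure Python SDK might look like the following. This is an illustration under assumptions, not an official walkthrough: it assumes the azure-identity and azure-mgmt-compute packages are installed and that the resource group already exists; the subscription ID, resource group, names and values are all placeholders.

```python
# Sketch: create a dedicated host group with the Azure Python SDK.
# Assumes azure-identity and azure-mgmt-compute are installed and that the
# resource group "rg-dedicated-demo" already exists; names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"  # placeholder
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# The host group pins down the region, availability zone and fault domain count.
host_group = compute.dedicated_host_groups.create_or_update(
    resource_group_name="rg-dedicated-demo",
    host_group_name="hg-demo",
    parameters={
        "location": "westeurope",
        "zones": ["1"],                     # optional: pin the group to one zone
        "platform_fault_domain_count": 2,
    },
)
print(host_group.id)
```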

After you create the host group, you can create a host within the group. Within the current preview, only the Ds3 and Es3 VM families are available to choose from. Microsoft will add more options soon.

Create dedicated host
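
Continuing the same sketch, the host itself is then placed inside the group. The SKU name below is an assumption based on the Ds3 family mentioned above; check the SKU list for your region before relying on it.

```python
# Sketch, continued: place a dedicated host inside the host group created above.
poller = compute.dedicated_hosts.begin_create_or_update(
    resource_group_name="rg-dedicated-demo",
    host_group_name="hg-demo",
    host_name="host-01",
    parameters={
        "location": "westeurope",
        "sku": {"name": "DSv3-Type1"},   # assumed SKU from the Ds3 family
        "platform_fault_domain": 0,
    },
)
dedicated_host = poller.result()
print(dedicated_host.provisioning_state)
```

When you later create VMs, you point them at this host by referencing the host’s resource ID in the VM configuration, which is what guarantees placement on your dedicated hardware.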

More Details About Pricing

As you can see in the screenshot, Microsoft added the option to use Azure Hybrid Use Benefits for Dedicated Host. That means you can use your on-prem Windows Server and SQL Server licenses with Software Assurance to reduce your costs in Azure.

Azure Hybrid Use Benefits pricing

Azure Dedicated Host also gives you more insight into the host, such as:

  • The underlying hardware infrastructure (host type)
  • Processor brand, capabilities, and more
  • Number of cores
  • Type and size of the Azure Virtual Machines you want to deploy

An Azure customer can control all host-level platform maintenance initiated by Azure, such as OS updates. Azure Dedicated Host gives you the option to schedule a maintenance window within a 35-day period in which these updates are applied to your host system. During this self-maintenance window, customers can apply maintenance to their hosts at their own convenience.

Looking a bit deeper into the service, Azure becomes more like a traditional hosting provider that gives customers a very dynamic platform.

The following screenshot shows the current pricing for a Dedicated Host.

Azure Dedicated Host pricing details

The following virtual machine types can run on a dedicated host.

Virtual Machines on a Dedicated Host

Currently, there is a soft limit of 3,000 vCPUs for dedicated hosts per region. That limit can be raised by submitting a support ticket.

When Would I Use A Dedicated Host?

In most cases, you would choose a dedicated host for compliance reasons: you may not want to share a host with other customers. Another reason could be that you want a guaranteed CPU architecture and type. If you place your VMs on the same host, it is guaranteed that they all run on the same architecture.

Further Reading

Microsoft has already published a lot of documentation and blog posts about the topic, so you can deepen your knowledge of Dedicated Host.

Resource #1: Announcement Blog and FAQ 

Resource #2: Product Page 

Resource #3: Introduction Video – Azure Friday “An introduction to Azure Dedicated Hosts | Azure Friday”

Author: Florian Klaffenbach

Google Cloud security gets boost with Secret Manager

Google has added a new managed service called Secret Manager to its cloud platform amid a climate increasingly marked by high-profile data breaches and exposures.

Secret Manager, now in beta, builds on existing Google Cloud security services by providing a central place to store and manage sensitive data such as API keys or passwords.

The system employs the principle of least privilege, meaning only a project’s owners can look at secrets without explicitly granted permissions, Google said in a blog post. Secret Manager works in conjunction with the Cloud Audit Logging service to create access audit trails. These data sets can then be moved into anomaly detection systems to check for breaches and other abnormalities.

All data is encrypted in transit and at rest with AES-256-level encryption keys. Google plans to add support for customer-managed keys later on, according to the blog.


Until now, Google Cloud customers have managed sensitive data with Berglas, an open source project that runs from the command line; Secret Manager adds a layer of abstraction through a set of APIs.

Berglas can be used on its own going forward, as well as directly through Secret Manager beginning with the recently released 0.5.0 version, Google said. Google also offers a migration tool for moving sensitive data out of Berglas and into Secret Manager.

Secret Manager builds on the existing Google Cloud security lineup, which also includes Key Management Service, Cloud Security Command Center and VPC Service Controls.

With Secret Manager, Google has introduced its own take on products such as HashiCorp Vault and AWS Secrets Manager, said Scott Piper, an AWS security consultant at Summit Route in Salt Lake City.


A key management service is used to keep an encryption key and perform encryption operations, Piper said. “So, you send them data, and they encrypt them. A secrets manager, on the other hand, is really no different than a database, but just with more audit logs and access checking. You request a piece of data from it — such as your database password — and it returns it back to you. The purpose of these solutions is to avoid keeping secrets in code.”
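
To make that concrete, here is a minimal sketch using the google-cloud-secret-manager Python client; the project ID and secret name are placeholders, and exact call signatures vary somewhat between client library versions.

```python
# Sketch: store a secret once, then read it at runtime instead of hardcoding it.
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
project_id = "my-project"   # placeholder
secret_id = "db-password"   # placeholder
parent = f"projects/{project_id}"

# Create the secret container with automatic replication ...
client.create_secret(
    request={
        "parent": parent,
        "secret_id": secret_id,
        "secret": {"replication": {"automatic": {}}},
    }
)

# ... add a version holding the actual value ...
client.add_secret_version(
    request={
        "parent": f"{parent}/secrets/{secret_id}",
        "payload": {"data": b"s3cr3t-value"},
    }
)

# ... and fetch it back when the application needs it.
response = client.access_secret_version(
    request={"name": f"{parent}/secrets/{secret_id}/versions/latest"}
)
print(response.payload.data.decode("utf-8"))
```

Each of those calls can then show up in the Cloud Audit Logging trail described above, which is what feeds the access audits and anomaly detection Google mentions.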


Indeed, Google’s Key Management Service and Secret Manager target two different audiences within enterprise IT, said Doug Cahill, an analyst at Enterprise Strategy Group in Milford, Mass.

“The former is focused on managing the lifecycle of data encryption keys, while the latter is focused on securing the secrets employed to securely operate API-driven infrastructure-as-code environments,” Cahill said.

As such, data security and privacy professionals and compliance officers are the likely consumers of a key management offering, whereas secret management services are targeted toward DevOps, Cahill added.

Meanwhile, it is surprising that the Google Cloud security portfolio didn’t already have something like Secret Manager, but AWS only released its own version in mid-2018, Piper said. Microsoft released Azure Key Vault in 2015 and has positioned it as appropriate for managing both encryption keys and other types of sensitive data.

Pricing for Secret Manager during the beta period has two components: Google charges $0.03 per 10,000 operations and $0.06 per active secret version per regional replica, per month.


Citrix’s performance analytics service gets granular

Citrix introduced an analytics service to help IT professionals better identify the cause of slow application performance within its Virtual Apps and Desktops platform.

The company announced the general availability of the service, called Citrix Analytics for Performance, at its Citrix Summit, an event for the company’s business partners, in Orlando on Monday. The service carries an additional cost.

Steve Wilson, the company’s vice president of product for workspace ecosystem and analytics, said many IT admins must deal with performance problems as part of the nature of distributed applications. When they receive a call from workers complaining about performance, he said, it’s hard to determine the root cause — be it a capacity issue, a network problem or an issue with the employee’s device.

Performance, he said, is a frequent pain point for employees, especially remote and international workers.

“There are huge challenges that, from a performance perspective, are really hard to understand,” he said, adding that the tools available to IT professionals have not been ideal in identifying issues. “It’s all been very technical, very down in the weeds … it’s been hard to understand what [users] are seeing and how to make that actionable.”

Part of the problem, according to Wilson, is that traditional performance-measuring tools focus on server infrastructure. Keeping track of such metrics is important, he said, but they do not tell the whole story.

“Often, what [IT professionals] got was the aggregate view; it wasn’t personalized,” he said.

When the aggregate performance of the IT infrastructure is “good,” Wilson said, that could mean that half an organization’s users are seeing good performance, a quarter are seeing great performance, but a quarter are experiencing poor performance.


With its performance analytics service, Citrix is offering a more granular picture of performance by providing metrics on individual employees, beyond those of the company as a whole. That measurement, which Citrix calls a user experience or UX score, evaluates such factors as an employee’s machine performance, user logon time, network latency and network stability.

“With this tool, as a system administrator, you can come in and see the entire population,” Wilson said. “It starts with the top-level experience score, but you can very quickly break that down [to personal performance].”

Wilson said IT admins who had tested the product said this information helped them address performance issues more expeditiously.

“The feedback we’ve gotten is that they’ve been able to very quickly get to root causes,” he said. “They’ve been able to drill down in a way that’s easy to understand.”

A proactive approach


Eric Klein, analyst at VDC Research Group Inc., said the service represents a more proactive approach to performance problems, as opposed to identifying issues through remote access of an employee’s computer.

“If something starts to degrade from a performance perspective — like an app not behaving or slowing down — you can identify problems before users become frustrated,” he said.


Klein said IT admins would likely welcome any tool that, like this one, could “give time back” to them.

“IT is always being asked to do more with less, though budgets have slowly been growing over the past few years,” he said. “[Administrators] are always looking for tools that will not only automate processes but save time.”

Enterprise Strategy Group senior analyst Mark Bowker said in a press release from Citrix announcing the news that companies must examine user experience to ensure they provide employees with secure and consistent access to needed applications.


“Key to providing this seamless experience is having continuous visibility into network systems and applications to quickly spot and mitigate issues before they affect productivity,” he said in the release.

Wilson said the performance analytics service was the product of Citrix’s push to the cloud during the past few years. One of the early benefits of that process, he said, has been in the analytics field; the company has been able to apply machine learning to the data it has garnered and derive insights from it.

“We do see a broad opportunity around analytics,” he said. “That’s something you’ll see more and more of from us.”


Box vs. Dropbox outages in 2019

In this infographic, we present a timeline of significant service disruptions in 2019 for Box vs. Dropbox.

Box vs. Dropbox outages in 2019

Cloud storage providers Box and Dropbox self-report service disruptions throughout each year. In 2019, Dropbox posted publicly about eight incidents; Box listed more than 50. But the numbers don’t necessarily provide an apples-to-apples comparison, because each company gets to choose which incidents to disclose.

This infographic includes significant incidents that prevented users from accessing Box or Dropbox in 2019, or at least from uploading and downloading documents. It excludes outages that appeared to last 10 minutes or fewer, as well as incidents labeled as having only “minor” or “medium” impact.

To view the full list of 2019 incidents for Box vs. Dropbox, visit status.box.com and status.dropbox.com.


Cisco 2020: Challenges, prospects shape the new year

Cisco finished 2019 with a blitz of announcements that recast the company’s service provider business. Instead of providing just integrated hardware and software, Cisco became a supplier of components for open gear.

Cisco enters the new decade with rearchitected silicon tailored for white box routers favored by cloud providers and other organizations with hyperscale data centers. To add punch to its new Silicon One chipset, Cisco plans to offer high-speed integrated optics from Acacia Communications. Cisco expects to complete its $2.6 billion acquisition of Acacia in 2020.

Cisco is aiming its silicon-optics combo at Broadcom. The chipmaker has been the only significant silicon supplier for white box routers and switches built on specifications from the Open Compute Project. The specialty hardware has become the standard within the mega-scale data centers of cloud providers like AWS, Google and Microsoft; and internet companies like Facebook.


“I think the Silicon One announcement was a watershed moment,” said Chris Antlitz, principal analyst at Technology Business Research Inc. (TBR).

Cisco designed Silicon One so white box manufacturers could program the hardware platform for any router type. Gear makers like Accton Technology Corporation, Edgecore Networks and Foxconn Technology Group will be able to use the chip in core, aggregation and access routers. Eventually, they could also use it in switches.

Cisco 2020: Silicon One in the 5G market

Cisco is attacking the cloud provider market by addressing its hunger for higher bandwidth and lower latency. At the same time, the vendor will offer its new technology to communication service providers. Their desire for speed and higher performance will grow over the next couple of years as they rearchitect their data centers to deliver 5G wireless services to businesses.

For the 5G market, Cisco could combine Silicon One with low-latency network interface cards from Exablaze, which Cisco plans to acquire by the end of April 2020. The combination could produce exceptionally fast switches and routers to compete with other telco suppliers, including Ericsson, Juniper Networks, Nokia and Huawei. Startups are also targeting the market with innovative routing architectures.

“Such a move could give Cisco an edge,” said Tom Nolle, president of networking consultancy CIMI Corp., in a recent blog. “If you combine a low-latency network card with the low-latency Silicon One chip, you might have a whole new class of network device.”

Cisco 2020: Trouble with the enterprise

Cisco will launch its repositioned service provider business, while contending with the broader problem of declining revenues. Cisco could have difficulty reversing that trend, while also addressing customer unhappiness with the high price of its next-generation networking architecture for enterprise data centers. 

“I do think 2020 is likely to be an especially challenging year for Cisco,” said John Burke, an analyst at Nemertes Research. “The cost of getting new goodies is far too high.”

Burke said he had spoken to several people in the last few months who had dropped Cisco gear from their networks to avoid the expense. At the same time, companies have reported using open source network automation tools in place of Cisco software to lower costs.

Cisco software deemed especially expensive includes its Application Centric Infrastructure (ACI) and DNA Center, Burke said. ACI and DNA Center are at the heart of Cisco’s modernized approach to the data center and campus network, respectively.

Both offer significant improvements over Cisco’s older network architectures. But they require businesses to purchase new Cisco hardware and retrain IT staff.

John Mulhall, an independent contractor with 20 years of networking experience, said any new generation of Cisco technology requires extra cost analyses to justify the price.

“As time goes on, a lot of IT shops are going to be a little bit reluctant to just go the standard Cisco route,” he said. “There’s too much competition out there.”

Cisco SD-WAN gets dinged

Besides getting criticized for high prices, Cisco also took a hit in 2019 for the checkered performance of its Viptela software-defined WAN, a centerpiece for connecting campus employees to SaaS and cloud-based applications. In November, Gartner reported that Viptela running on Cisco’s IOS-XE platform had “stability and scaling issues.”

Also, customers who had bought Cisco’s ISR routers during the last few years reported the hardware didn’t have enough throughput to support Viptela, Gartner said.

The problems convinced the analyst firm to drop Cisco from the “leaders” ranking of Gartner’s latest Magic Quadrant for WAN Edge Infrastructure.

Gartner and some industry analysts also knocked Cisco for selling two SD-WAN products — Viptela and Meraki — with separate sales teams and distinct management and hardware platforms.

The approach has made it difficult for customers and resellers to choose the product that best suits their needs, analysts said. Other vendors use a single SD-WAN product to address all use cases.

“Cisco’s SD-WAN is truly a mixed bag,” said Roy Chua, principal analyst at AvidThink. “In the end, the strategy will need to be clearer.”

Antlitz of TBR was more sanguine about Cisco’s SD-WAN prospects. “We see no reason to believe that Cisco will lose its status as a top-tier SD-WAN provider.”


Logically acquires Carolinas IT in geographic expansion

Logically Inc., a managed service provider based in Portland, Maine, has acquired Carolinas IT, a North Carolina MSP with cloud, security and core IT infrastructure skills.

The deal continues Logically’s geographic expansion. The company launched in April, building upon the 2018 merger of Portland-based Winxnet Inc. and K&R Network Solutions of San Diego. Logically in August added a New York metro area company through its purchase of Sullivan Data Management, an outsourced IT services firm.

Carolinas IT, based in Raleigh, provides a base from which Logically can expand in the region, Logically CEO Christopher Claudio said. “They are a good launching pad,” he said.

Carolinas IT’s security and compliance practice also attracted Logically’s interest. Claudio called security and compliance a growing market and one that will continue to expand, given the challenges organizations face with regulatory obligations and the risk of data breaches.

“You can’t be an MSP without a security and compliance group,” he said.

Carolinas IT’s security services include risk assessment, HIPAA consulting and auditing, security training and penetration testing. The company’s cloud offerings include Office 365 and private hosted cloud services. And its professional services personnel possess certifications from vendors such as Cisco, Citrix, Microsoft, Symantec and VMware.


Claudio cited Carolinas IT’s “depth of talent,” recurring revenue and high client retention rate as some of the business’s favorable attributes.

Mark Cavaliero, Carolinas IT’s president and CEO, will remain at the company for the near term but plans to move on and will not have a long-term leadership role, Claudio said. But Cavaliero will have an advisory role, Claudio added, noting “he has built a great business.”

Logically’s Carolinas IT purchase continues a pattern of companies pulling regional MSPs together to create national service providers. Other examples include Converge Technology Solutions Corp. and Mission.


AWS, Azure and Google peppered with outages in same week

AWS, Microsoft Azure and Google Cloud all experienced service degradations or outages this week, an outcome that suggests customers should accept that cloud outages are a matter of when, not if.

In AWS’s Frankfurt region, EC2, Relational Database Service, CloudFormation and Auto Scaling were all affected Nov. 11, with the issues now resolved, according to AWS’s status page.

Azure DevOps services for Boards, Repos, Pipelines and Test Plans were affected for a few hours in the early hours of Nov. 11, according to its status page. Engineers determined that the problem had to do with identity calls and rebooted access tokens to fix the system, the page states.

Google Cloud said some of its APIs in several U.S. regions were affected, and others experienced problems globally on Nov. 11, according to its status dashboard. Affected APIs included those for Compute Engine, Cloud Storage, BigQuery, Dataflow, Dataproc and Pub/Sub. Those issues were resolved later in the day.

Google Kubernetes Engine also went through some hiccups over the past week, in which nodes in some recently upgraded container clusters experienced high rates of kernel panics. Known more colloquially as the “blue screen of death” and other terms, kernel panics are conditions wherein a system’s OS can’t recover from an error quickly or easily.

The company rolled out a series of fixes, but as of Nov. 13, the status page for GKE remained in orange status, which indicates a small number of projects are still affected.

AWS, Microsoft and Google have yet to provide the customary post-mortem reports on why the cloud outages occurred, although more information could emerge soon.

Move to cloud means ceding some control

The cloud outages at AWS, Azure and Google this week were far from the worst experienced by customers in recent years. In September 2018, severe weather in Texas caused a power surge that shut down dozens of Azure services for days.


Cloud providers have aggressively pursued region and zone expansions to help with disaster recovery and high-availability scenarios. But customers must still architect their systems to take advantage of the expanded footprint.

Still, customers have much less control when it comes to public cloud usage, according to Stephen Elliot, an analyst at IDC. That reality requires some operational sophistication.


“Networks are so interconnected and distributed, lots of partners are involved in making a service perform and available,” he said. “[Enterprises] need a risk mitigation strategy that covers people, process, technologies, SLAs, etc. It’s a myth that outages won’t happen. It could be from weather, a black swan event, security or a technology glitch.”


This fact underscores why more companies are experimenting with and deploying workloads across hybrid and multi-cloud infrastructures, said Jay Lyman, an analyst at 451 Research. “They either control the infrastructure and downtime with on-premises deployments or spread their bets across multiple public clouds,” he said.

Ultimately, enterprise IT shops can weigh the challenges and costs of running their own infrastructure against public cloud providers and find it difficult to match, said Holger Mueller, an analyst at Constellation Research.

“That said, performance and uptime are validated every day, and should a major and longer public cloud outage happen, it could give pause among less technical board members,” he added.


Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’ “

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.
