Tag Archives: storage

For Sale – Huawei Matebook X Pro – i7, 512GB, MX150

I am selling my Huawei Matebook X Pro: i7, 8GB RAM, 512GB storage and an MX150 GPU.
It is in excellent condition and I can’t find any scratches or dings on it anywhere.

I bought it from the Microsoft Store so I have been the only owner, and I purchased it on 21st November 2018.
It comes in the original box, with the original charger and the HDMI accessory that came with it.

The only reason for sale is that I was travelling a lot for work at the time; since that has died down, I’ve built a desktop.

This is an excellent laptop with a great display, and the battery life has never let me down.


Amazon tech VP lays out ambitious AWS storage vision

LAS VEGAS – There appears to be no end in sight to the ambitious vision of AWS storage, especially when it comes to file systems.

During an interview with TechTarget, Amazon VP of technology Bill Vass said AWS aims to “enable every customer to be able to move to the cloud.” For example, Amazon could offer any of the approximately 35 file systems that its enterprise customers use, under the FSx product name, based on customer demand, Vass said. FSx stands for File System x, where the “x” can be any file system. AWS launched the first two FSx options, for Lustre and Windows file systems, at its November 2018 Re:Invent conference.

(Editor’s note: Vass said during the original interview that AWS will offer all 35 file systems over time. After the article published, Vass contacted us via email to clarify his statement. He wrote: “FSx is built to offer any type of file system from any vendor. I don’t want it to seem that we have committed to all 35, just that we can if customers want it.”)
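
For a concrete sense of what the FSx model looks like from the developer side, here is a minimal sketch that provisions an FSx for Lustre file system with the AWS SDK for Python (boto3). The region, subnet ID, capacity and tag values are illustrative placeholders, not details from the interview.

```python
import boto3

# Minimal sketch: provisioning an FSx for Lustre file system with boto3.
# The subnet ID is a placeholder and StorageCapacity is in GiB.
fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="LUSTRE",                 # FSx also offers "WINDOWS"
    StorageCapacity=1200,                    # GiB; size to the workload
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    Tags=[{"Key": "workload", "Value": "hpc-scratch"}],
)

print(response["FileSystem"]["FileSystemId"])
```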

AWS cannot support nearly three dozen file systems overnight, but Vass highlighted a new storage feature coming in 2020: a central storage management console similar to the AWS Backup option that unifies backups.

Vass has decision-making oversight over all AWS storage products (except Elastic Block Store), as well as quantum computing, IoT, robotics, CloudFormation, CloudWatch monitoring, system management, and software-defined infrastructure. Vass has held CEO, COO, CIO, CISO and CTO positions for startups and Fortune 100 companies, as well as the federal government. Before joining Amazon more than five years ago, Vass was president and CEO of Liquid Robotics, which designs and builds autonomous robots for the energy, shipping, defense, communications, scientific, intelligence and environmental industries.

How has the vision for AWS storage changed since the object-based Simple Storage Service (S3) launched in 2006?

Amazon VP of technology Bill Vass with AWS Snowball appliance.

Bill Vass: Originally, it was very much focused on startups, developers and what we call webscale or web-facing storage. That’s what S3 was all about. Then as we grew in the governments and enterprises, we added things like [write once read many] WORM, [recovery point objective] RPO for cross-region replication, lifecycle management, intelligent tiering, deep archive. We were the first to have high-performance, multi-[availability zone] AZ file systems. Block storage has continued to be a mainstay for databases and things like that. We launched the first high-performance file system that will rival anything on prem with FSx for [high-performance computing] HPC. So, we ran Lustre in there. And Lustre gives you microsecond latency, 100 gigabits per thread, connected directly to your CPU.

The other thing we did at Re:Invent [2018] was the FSx for SMB NTFS Windows. At Re:Invent this year, we launched the ability to replicate that to one, two or three AZs. They added a bunch of extra features to it. But, you can expect us with FSx to offer other file systems as well. There’s about 35 different file systems that enterprises use. We can support many – really anything with FSx. But we will roll them out in order of priority by customer demand.

What about Amazon Elastic File System?

Vass: Elastic File System, which is our NFS 4 file system, has got single-digit millisecond response. That is actually replicating across three separate buildings with three different segments, striping it multiple times. EFS is an elastic multi-tenant file system. FSx is a single-tenant file system. To get microsecond latency, you have to be right there next to the CPU. You can’t have microsecond latency if you’re striping across three different buildings and then acknowledging that.

Do you plan to unify file storage? Or, do you plan to offer a myriad of choices?

Vass: Certainly, they’re all unified and can interoperate with each other. FSx, S3, intelligent tiering, all that kind of stuff, and EFS all work together. That’s already there. However, we don’t think file systems are one size fits all. There’s 35 different file systems, and the point of FSx is to let people have many choices, just like we have with databases or with CPUs or anything like this. You can’t move a load that’s running on GPFS into AWS without making changes for it. So you’d want to offer that as a file system. You can’t move an HPC load without something like FSx Lustre. You can’t move your Windows Home directories into AWS without FSx for Windows. And I would just expect more and more features around EFS, more and more features on S3, more and more features around FSx with more and more options for file systems.

So, you don’t envision unifying file storage.

Vass: There will be a central storage management system coming out where you’ll see it just like we have a central backup system now. So, they’ll be unified at that level. There’ll be a time when you’ll be able to access things with SMB, NFS and object in the same management console and on the same storage in the future. But that’s not really unified, right? Because you still want to have the single-tenant operating environment for your Windows. Microsoft does proprietary extensions on top of SMB, so you’ll need to run Windows underneath that. You can run something like [NetApp] OnTap, which also runs on AWS, by the way. And it does a great job of emulating NFS 4, 3, and SMB. But it’s never going to be 100% Windows compatible. So for that, you’re still going to want to run the Windows-native environment underneath.

I’d love to have one solution that did it all, but when you do that, what you usually end up with is something that does everything, but none of it well. So, you’re still going to want to have your high-performance object storage, block storage, elastic file systems and your single-tenant file systems for the foreseeable future. They’ll all interoperate with each other. They all get 11 nines durability by snapshotting or direct storing. You’re still going to have your archive storage. You don’t really want an archive system that operates the same as the file system or an object system.

How will the management console work to manage all the systems?

Vass: Since we unified backups with AWS Backup, you can take a look at that one place where we’re backing everything up in AWS. Now, we haven’t turned every service on. There’s actually 29 stateful stores in AWS. So, what we’re doing with backup is adding them one after another until they’re all there. You go to one place to back everything up.

We’ll add a storage management console. Today, you would go to an S3 console, an FSx console, an EFS console and a relational database console, then an Aurora console, then an EBS console. There’ll be one system management console that will let you see everything in one place and one place where you’ll be able to manage all of it as well. That’s scheduled for some time next year.
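
For reference, the backup unification Vass points to is already exposed through the AWS Backup API. The sketch below defines a simple backup plan with boto3; the plan name, vault name, schedule and retention are invented placeholders, shown only to illustrate the one-place model.

```python
import boto3

# Minimal sketch of the AWS Backup "one place" model: one plan whose
# rules can apply across the stateful services AWS Backup supports.
backup = boto3.client("backup", region_name="us-east-1")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-everything",  # placeholder name
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",         # placeholder vault
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily, 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},       # retention window
            }
        ],
    }
)

print(plan["BackupPlanId"])
```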

I’ve been hearing from enterprise customers that it can get confusing and overwhelming to keep track of the breadth of AWS storage offerings.

Vass: Let me counter that. We listen to our customers, and I guarantee you at Re:Invent this year, each customer I met with, one of those services that we added was really important to them, because remember, we’re moving everything from on prem to the cloud. … There are customers that want NFS 3 still. There’s customers that want NFS 4. There’s customers that want SMB and NTFS. There’s customers that want object storage. There’s customers that want block storage. There’s customers that want backups. If we did just one, and we told everyone rewrite your apps, it would take forever for people to move.

The best thing people can do is get our roadmaps. We disclose our roadmaps under NDA to any customer that asks, and we’ll show them what’s coming and when, so that they can plan with some idea of when we’re going to solve all of their problems. We’ve got 2.2 million customers, and all of them need something. And they have quite a variability of needs that we work to meet. So, it’s necessary to have that kind of innovation. And of course, we see things our customers do all the time.

So, AWS storage is basically going for the ocean and aiming to get every customer away from a traditional storage vendor.

Vass: I wouldn’t say it that way. I’d say we want to enable every customer to be able to use the cloud and Outpost and Snowball and Storage Gateway and all of our products so they can save money, be elastically scaling, have higher durability and better security than they usually do on prem.


Quobyte storage brings satellite imagery down to Earth

3vGeomatics needed storage that handled data from space to assess foundations here on Earth. After considering NAS arrays and object storage, the company chose Quobyte storage for its ability to handle massive file workloads.

The company, based in Vancouver, B.C., deploys special radar equipment to detect millimeter-level ground movements. The goal is to preserve structural integrity and forestall environmental disasters. The firm’s customers include companies in construction, energy exploration, earth science and transportation.

3vGeomatics uses technology known as Interferometric Synthetic Aperture Radar (InSAR) to capture satellite images and overlay them, one atop another, to map ground deformation in a given area.

“We measure the ground from space, which I think is really cool. We take a stack of satellite radar images of an area over time, and then we compare them,” 3vGeomatics IT manager Joe Chapman said.
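
The technique Chapman describes can be reduced to a short sketch: each SAR acquisition is an array of complex-valued samples, and multiplying one image by the complex conjugate of a co-registered second image produces an interferogram whose phase encodes surface change between the two passes. The arrays below are synthetic stand-ins, not 3vGeomatics data.

```python
import numpy as np

# Synthetic stand-ins for two co-registered complex SAR images of the
# same scene, acquired on different satellite passes.
rng = np.random.default_rng(0)
shape = (512, 512)
img1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
img2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Interferogram: pixelwise product of image 1 with the conjugate of
# image 2. Its phase is the per-pixel phase difference between passes.
interferogram = img1 * np.conj(img2)
phase = np.angle(interferogram)  # radians, in (-pi, pi]

# With real data, the unwrapped phase scales to line-of-sight motion by
# the radar wavelength: displacement ≈ -wavelength * phase / (4 * pi).
print(phase.shape, float(phase.min()), float(phase.max()))
```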

Distributed storage to replace NFS

Chapman deployed the Quobyte file system in 2018, shortly after joining 3vGeomatics. He was hired with a mandate to upgrade outmoded NFS storage.

“I needed storage that is POSIX-compliant. And I needed it to be fast,” Chapman said.

At the time, 3vGeomatics had been planning to expand its compute farm but was already experiencing NFS bottlenecks. The company had a “giant [Oracle] ZFS node,” and storage frequently locked up when sending data to its NFS server.

“That was due to the way we manipulate data. It’s very hard on storage,” Chapman said.

He broke his options into three distinct categories to better evaluate the individual technologies: legacy array vendors, object storage and software-defined storage on commodity servers, which includes Quobyte storage and several other vendors.

I needed storage that is POSIX-compliant. And I needed it to be fast.
Joe Chapman, IT manager, 3vGeomatics

He looked at Dell EMC Isilon NAS arrays but found their “storage under the hood is based on NFS.” Object storage, meanwhile, couldn’t deliver the highly parallel data access 3vGeomatics needed.

Chapman said Quobyte storage proved to be the best fit for his company and its Synthetic Aperture Radar (SAR) imagery. Quobyte natively supports Portable Operating System Interface (POSIX) protocols and, while it supports standard file protocols like NFS, the scale-out file system has its own client to avoid performance lags.
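
Stripped to its essentials, the requirement Chapman lays out is many workers doing ordinary file I/O against one shared mount at the same time. The generic sketch below shows that access pattern in Python; the mount path is hypothetical, and nothing in it is Quobyte-specific.

```python
import os
from concurrent.futures import ProcessPoolExecutor

MOUNT = "/mnt/shared"  # hypothetical POSIX mount point

def process_tile(i: int) -> int:
    """Each worker performs plain POSIX I/O on the shared mount."""
    path = os.path.join(MOUNT, f"tile_{i:04d}.bin")
    with open(path, "wb") as f:
        f.write(os.urandom(1 << 20))  # 1 MiB of stand-in tile data
    return os.path.getsize(path)

if __name__ == "__main__":
    # Many concurrent writers: the pattern that bottlenecked on NFS and
    # that a parallel POSIX file system is built to absorb.
    with ProcessPoolExecutor(max_workers=16) as pool:
        sizes = list(pool.map(process_tile, range(64)))
    print(sum(sizes), "bytes written")
```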

Quobyte founders Felix Hupfeld and Björn Kolbeck are former Google engineers who compare their file system to “Google-like storage.” Quobyte policies bind its file system volumes to storage hardware for data placement across devices. The Quobyte file system also includes built-in analytics and monitoring.

“I wanted to be able to put all our storage on one system and manage it from there,” Chapman said.


For Sale – Synology DS218j 2 Bay NAS

I have for sale a mint condition Synology NAS drive (HDD drives not included).

In perfect working order with Original packaging.
Selling as I have purchased a 4 bay NAS.

I’ll post some images in the next couple of days once I’ve had a chance to remove it from my system and set up the new NAS.
In the meantime I’ve posted a link above for product details on Synology’s website.

Location
Wimblington, Cambridgeshire
Price and currency
£100
Delivery cost included
Delivery Is Included
Prefer goods collected?
I prefer the goods to be collected
Advertised elsewhere?
Not advertised elsewhere
Payment method
Bank Transfer


HCI storage adoption rises as array sales slip

The value and volume of data keep growing, yet in 2019 most primary storage vendors reported a drop in sales.

Part of that has to do with companies moving data to the cloud. It is also being redistributed on premises, moving from traditional storage arrays to hyper-converged infrastructure (HCI) and data protection products that have expanded into data management.

That helps explain why Dell Technologies bucked the trend of storage revenue declines last quarter. A close look at Dell’s results shows its gains came from areas outside of traditional primary storage arrays that have been flat or down from its rivals.

Dell’s storage revenue of $4.15 billion for the quarter grew 7% over last year, but much of Dell’s storage growth came from HCI and data protection. According to Dell COO Jeff Clarke, orders of VxRail HCI storage appliances increased 82% over the same quarter in 2018. Clarke said new Data Domain products also grew significantly, although Dell provided no revenue figures for backup.

Hyper-converged products combine storage, servers and virtualization in one box. VxRail, which relies on vSAN software from Dell-owned VMware running on Dell PowerEdge, appears to be cutting in on sales of both independent servers and storage. Dell server revenue declined around 10% year-over-year, around the same as rival Hewlett Packard Enterprise’s (HPE) server decline.

“We’re in this data era,” Clarke said on Dell’s earnings call last week. “The amount of data created is not slowing. It’s got to be stored, which is probably why we are seeing a slightly different trend from the compute side to the storage side. But I would point to VxRail hyper-convergence, where we’ll bring computing and storage together, helping customers build on-prem private clouds.”

The amount of data created is not slowing. It’s got to be stored.
Jeff Clarke, COO, Dell

Dell is counting on a new midrange storage array platform to push storage revenue in 2020. Clarke said he expected those systems to start shipping by the end of January.

Dell’s largest storage rivals have reported a pause in spending, partially because of global conditions such as trade wars and tariffs. NetApp revenues have fallen year-over-year each of the last three quarters, including a 9.6% dip to $1.38 billion last quarter. HPE said its storage revenue of $848 million dropped 12% from last year. HPE’s Nimble Storage midrange array platform grew 2% and Simplivity HCI increased 14% year-over-year, a sign that 3PAR enterprise arrays fell and the vendor’s new Primera flagship arrays have not yet generated meaningful sales.

Dell Technologies COO Jeff Clarke

IBM storage has also declined throughout the year, dropping 4% year-over-year to $434 million last quarter. Pure Storage’s revenue of $428 million last quarter increased 16% from last year, but Pure had consistently grown revenue at significantly higher rates throughout its history.

Meanwhile, HCI storage revenue is picking up. Nutanix last week reported a leveling of revenue following a rocky start to 2019. Related to VxRail’s increase, VMware said its vSAN license bookings had increased 35%. HPE’s HCI sales grew, while overall storage dropped. Cisco did not disclose revenue for its HyperFlex HCI platform, but CEO Chuck Robbins called it out for significant growth last quarter.

Dell/VMware and Nutanix still combine for most of the HCI storage market. Nutanix’s revenue ($314.8 million) and subscription ($380.0 million) results were better than expected last quarter, although both numbers were around the same as a year ago. It’s hard to accurately measure Nutanix’s growth from 2018 because the vendor switched to subscription billing. But Nutanix added 780 customers and its 66 deals of more than $1 million were its most ever. And the total value of its customer contracts came to $305 million, up 9% from a year ago.

Nutanix’s revenue shift came after the company switched to a software-centric model. It no longer records revenue from the servers it ships its software on. Nutanix and VMware are the dominant HCI software vendors.

“It’s just the two of us, us and VMware,” Nutanix CEO Dheeraj Pandey said in an interview after his company’s earnings call. “Hyper-convergence now is really driven by software as opposed to hardware. I think it was a battle that we had to win over the last three or four years, and the dust has finally settled and people see it’s really an operating system play. We’re making it all darn simple to operate.”


Pure Storage cloud sales surge, but earnings miss the target

Add Pure Storage to the list of infrastructure vendors that sense softening global demand. The all-flash pioneer put the best face on last quarter’s financial numbers, focusing on solid margins and revenue while downplaying its second earnings miss in the last three quarters.

Demand for Pure Storage cloud services boosted revenue to $428.4 million for the quarter that ended Oct. 31. That’s up 15% year over year, but lower than the $440 million expectation on Wall Street.

Pure Storage launched as a startup in 2009 and has grown steadily to a publicly traded company with $1.5 billion in revenue. On Pure’s earnings call last week, CEO Charles Giancarlo blamed the revenue miss on declining flash prices. Giancarlo said U.S. trade tensions with China and uncertainty surrounding Brexit create economic headwinds for infrastructure vendors — concerns also voiced recently by rivals Dell EMC and NetApp.

Pure: Looking for bright spot in cloud

Like most major storage vendors, Pure is rebranding to tap into the burgeoning demand for hybrid cloud. Recent additions to the Pure Storage cloud portfolio include Cloud Block Store, which allows users to run Pure’s FlashArray systems in Amazon Web Services, and the consumption-based Pure as a Service, formerly the Evergreen Storage Service (ES2).

Pure said deferred licensing revenue of $643 million rose 39%, fueled by record growth of ES2 sales. The Pure Storage cloud strategy resonates with customers that want storage with cloudlike agility, company executives said.

“Data storage still remains the least cloudlike layer of technology in the data center. Delivering data storage in an enterprise is still an extraordinarily manual process with storage arrays highly customized and dedicated to particular workloads,” Giancarlo said.

Pure claims it added nearly 400 customers last quarter, bringing its total to more than 7,000. That includes cloud IT services provider ServiceNow, which uses Pure all-flash arrays to underpin its production cloud.

“Companies are realizing IT services are not their main line of business — that a cloud-hosted services model is generally better. We’re right in the middle of that. We build enterprise data services and do all the work to manage the cloud” for corporate customers, Keith Martin, ServiceNow’s director of cloud capacity engineering, told SearchStorage in an interview this year.

Pure will use its increased product margin — which jumped 4.5 points last quarter to 73% — to ensure it “won’t lose on price” in competitive deals, outgoing president David Hatfield said.

A strong pipeline of Pure Storage cloud and on-premises deals gives it the ability to bundle multiple products and sell more terabytes. “It’s just taking a little bit longer from a deal-push perspective, but our win rates are holding nicely,” Hatfield said.

Hatfield said he is stepping away from president duties to deal with a family health issue, but he will remain Pure’s vice chairman and special advisor to Giancarlo. Former Riverbed Technology CEO Paul Mountford was introduced as Pure’s new COO. Kevan Krysler, most recently VMware’s senior vice president of finance and chief accounting officer, will take over in December as Pure’s CFO. He will replace Tim Ritters, who announced his departure in August.


New DataCore vFilO software pools NAS, object storage

DataCore Software is expanding beyond block storage virtualization with new file and object storage capabilities for unstructured data.

Customers can use the new DataCore vFilO to pool and centrally manage disparate file servers, NAS appliances and object stores located on premises and in the cloud. They also have the option to install vFilO on top of DataCore’s SANsymphony block virtualization software, now rebranded as DataCore SDS.

DataCore CMO Gerardo Dada said customers that used its block storage virtualization asked for similar capabilities on the file side. Bringing diverse systems under central management can give them a consistent way to provision, encrypt, migrate and protect data, and to locate and share files. Unifying the data also paves the way for customers to use tools such as predictive analytics, Dada said.

Global namespace

The new vFilO software provides a scale-out file system for unstructured data and virtualization technology to abstract existing storage systems. A global namespace facilitates unified access to local and cloud-based data through standard NFS, SMB, and S3 protocols. On the back end, vFilO communicates with the file systems through parallel NFS to speed response times. The software separates metadata from the data to facilitate keyword queries, the company said.

Users can set policies at volume or more granular file levels to place frequently accessed data on faster storage and less active data on lower cost options. They can control the frequency of snapshots for data protection, and they can archive data in on-premises or public cloud object storage in compressed and deduplicated format to reduce costs, Dada said.

DataCore’s vFilO supports automated load balancing across the diverse filers, and users can add nodes to scale out capacity and performance. The minimum vFilO configuration for high availability is four virtual machines, with one node managing the metadata services and the other handling the data services, Dada said.

New DataCore vFilO software can pool and manage disparate file servers, NAS appliances and object stores.

File vs. object storage

Steven Hill, a senior analyst of storage technologies at 451 Research, said the industry in general would need to better align file and object storage moving forward to address the emerging unstructured data crisis.

“Most of our applications still depend on file systems, but metadata-rich object is far better suited to longer-term data governance and management — especially in the context of hybrid IT, where much of the data resident in the cloud is based on efficient and reliable objects,” Hill said.

“File systems are great for helping end users remember what their data is and where they put it, but not very useful for identifying and automating policy-based management downstream,” Hill added. “Object storage provides a highly-scalable storage model that’s cloud-friendly and supports the collection of metadata that can then be used to classify and manage that data over time.”

DataCore expects the primary use cases for vFilO to include consolidating file systems and NAS appliances. Customers can use vFilO to move unused or infrequently accessed files to cheaper cloud object storage to free up primary storage space. They can also replicate files for disaster recovery.

Eric Burgener, a research vice president at IDC, said unstructured data is a high growth area. He predicts vFilO will be most attractive to the company’s existing customers. DataCore claims to have more than 10,000 customers.

“DataCore customers already liked the functionality, and they know how to manage it,” Burgener said. “If [vFilO] starts to get traction because of its ease of use, then we may see more pickup on the new customer side.”

Camberley Bates, a managing director and analyst at Evaluator Group, expects DataCore to focus on the media market and other industries needing high performance.

Pricing for vFilO

Pricing for vFilO is based on capacity consumption, with a 10 TB minimum order. One- and three-year subscriptions are available, with different pricing for active and inactive data. A vFilO installation with 10 TB to 49 TB of active data costs $345 per TB for a one-year subscription and $904 per TB for a three-year subscription. For the same capacity range of inactive data, vFilO would cost $175 per TB for a one-year subscription and $459 per TB for a three-year subscription. DataCore offers volume discounts for customers with higher capacity deployments.
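
Working only from the list prices quoted above, a quick back-of-the-envelope calculation shows how a mixed deployment prices out. The 30 TB active / 15 TB inactive split is an invented example, and it assumes the three-year figure is the total for the term rather than an annual rate.

```python
# vFilO subscription cost from the quoted 10-49 TB tier list prices.
# The capacity split is invented; three-year prices are assumed to be
# totals for the full term, not per-year rates.
ACTIVE_TB, INACTIVE_TB = 30, 15

one_year = ACTIVE_TB * 345 + INACTIVE_TB * 175
three_year = ACTIVE_TB * 904 + INACTIVE_TB * 459

print(f"1-year subscription:  ${one_year:,}")           # $12,975
print(f"3-year subscription:  ${three_year:,}")         # $34,005
print(f"3-year, per year:     ${three_year / 3:,.0f}")  # $11,335
```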

The Linux-based vFilO image can run on a virtual machine or on commodity bare-metal servers. Dada said DataCore recommends separate hardware for the differently architected vFilO and SANsymphony products to avoid resource contention. Both products have plugins for Kubernetes and other container environments, Dada added.

The vFilO software became available this week as a software-only product, but Dada said the company could add an appliance if customers and resellers express enough interest. DataCore launched a hyper-converged infrastructure appliance for SANsymphony over the summer. 

DataCore incorporated technology from partners and open source projects into the new vFilO software, but Dada declined to identify the sources.


HPE Cray ClusterStor E1000 arrays tackle converged workloads

Supercomputer maker Cray has pumped out revamped high-density ClusterStor storage, its first significant product advance since being acquired by Hewlett Packard Enterprise.

The new Cray ClusterStor E1000 launched this week, six months after HPE’s $1.3 billion acquisition of Cray in May. Engineering of the E1000 began before the HPE acquisition.

Data centers can mix the dense Cray ClusterStor E1000 all-flash and disk arrays to build ultrafast “exascale” storage clusters that converge processing for AI, modeling and simulation, and similar data sets, said Ulrich Plechschmidt, Cray lead manager.

The E1000 arrays run a hardened version of the Lustre open source parallel file system. The all-flash E1000 provides 4.5 TB of raw storage per SSD rack, with expansion shelves that add up to 4.6 TB. The all-flash system potentially delivers up to 1.6 TB per second of throughput and 50 million IOPS per SSD rack, while an HDD rack is rated at 120 Gbps and 10 PB of raw capacity.

When fully built out, Plechschmidt said ClusterStor can scale to 700 PB of usable capacity in a single system, with throughput up to 10 PB per second.

Cray software stack

Cray ClusterStor disk arrays pool flash and disk within the same file system. ClusterStor E1000 includes Cray-designed PCIe 4.0 storage servers that serve data from NVMe SSDs and spinning disk. Cray’s new Slingshot 200 Gbps interconnect top-of-rack switches manage storage traffic.

The most impressive work Cray did is on the software side. You might have to stage data in 20 different containers at the same time, each one outfitted differently. … That’s a very difficult orchestration process.
Steve Conway, COO and senior research vice president, Hyperion Research

Newly introduced ClusterStor Data Services manage orchestration and data tiering, which initially will be available as scripted tiering for manually invoking Lustre software commands. Automated data movement and read-back/write-through caching are on HPE’s Cray roadmap.

While ClusterStor E1000 hardware has massive density and low-latency throughput, Cray invested significantly in upgrading its software stack, said Steve Conway, COO and senior research vice president at Hyperion Research, based in St. Paul, Minn.

“To me, the most impressive work Cray did is on the software side. You might have to stage data in 20 different containers at the same time, each one outfitted differently. And you have to supply the right data at the right time and might have to solve the whole problem in milliseconds. That’s a very difficult orchestration process,” Conway said.

The ClusterStor odyssey

HPE is the latest in a string of vendors to take ownership of ClusterStor. Seagate Technology acquired original ClusterStor developer Xyratex in 2013, then in 2017 sold ClusterStor to Cray, which had been a Seagate OEM partner.

HPE-owned Cray released new all-flash and disk ClusterStor arrays for AI and containerized workloads.

HPE leads the high-performance computing (HPC) market in overall revenue, but it has not had a strong presence in the high end of the supercomputing market. Buying Cray allows HPE to sell more storage for exascale computing, which represents a thousandfold increase above petabyte-scale processing computing power. These high-powered exascale systems are priced beyond the budgets of most commercial enterprises.

Cray’s Shasta architecture underpins three large supercomputing sites at federal research labs: Argonne National Laboratory in Lemont, Ill.; Lawrence Livermore National Laboratory in Livermore, Calif.; and Oak Ridge National Laboratory in Oak Ridge, Tenn.

Cray last year won a $146 million federal contract to architect a new supercomputer at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. That system will use Cray ClusterStor storage.

Conway said Cray and other HPC competitors are under pressure to expand to address newer abstraction methods for processing data, including AI, container storage and microservices architecture.

“You used to think of supercomputers as a single-purpose steak knife. Now they have to be a multipurpose Swiss Army knife. The newest generation of supercomputers are all about containerization and orchestration of data on premises,” Conway said. “They have to be much more heterogeneous in what they do, and the storage has to follow suit.”


IBM Spectrum Protect supports container backups

IBM Storage will tackle data protection for containerized and cloud-based workloads with upcoming updates to its Spectrum Protect Plus backup product and Red Hat OpenShift container platform.

Like other vendors, IBM has offered primary storage options for container-based applications. Now IBM Spectrum Protect Plus will support backup and recovery of persistent container volumes for customers who use Kubernetes orchestration engines.

IBM Spectrum Protect Plus supports the Container Storage Interface (CSI) to enable Kubernetes users to schedule snapshots of persistent Ceph storage volumes, according to IBM. The company said the Spectrum Protect backup software offloads copies of the snapshots to repositories outside Kubernetes production environments.
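
The CSI snapshot mechanism itself is standard Kubernetes rather than anything IBM-specific: a VolumeSnapshot object names a PersistentVolumeClaim, and the CSI driver behind that claim cuts the snapshot. The sketch below creates one with the official Kubernetes Python client; the snapshot class and claim names are placeholders, and this illustrates the general CSI flow, not Spectrum Protect Plus internals.

```python
from kubernetes import client, config

# Minimal sketch: requesting a CSI snapshot of a persistent volume by
# creating a VolumeSnapshot custom resource (snapshot.storage.k8s.io).
config.load_kube_config()
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "pg-data-snap-001"},  # placeholder name
    "spec": {
        "volumeSnapshotClassName": "csi-rbd-snapclass",      # placeholder class
        "source": {"persistentVolumeClaimName": "pg-data"},  # placeholder PVC
    },
}

api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```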

IBM will offer a tech preview of the container backup support in the OpenShift platform that it gained through its Red Hat acquisition. The tech preview is scheduled for this year, with general availability expected in the first quarter of 2020, subject to the availability of CSI snapshot support in Red Hat OpenShift, according to Eric Herzog, CMO and vice president of worldwide storage channels at IBM.

“The problem with Kubernetes is there’s really no standard storage architecture. So you’re starting to see all of the vendors scramble to implement CSI driver support, which links your Kubernetes containers with backend storage,” said Steve McDowell, a senior analyst at Moor Insights and Strategy.

CSI snapshots

McDowell said IBM and other vendors are stepping up to provide CSI drivers for general-purpose backend storage for containers. He said few, if any, tier one vendors support CSI snapshots for data protection of Kubernetes clusters.

But enterprise demand is still nascent for persistent storage for containerized applications and, by extension, backup and disaster recovery, according to IDC research manager Andrew Smith. He said many organizations are still in the early discovery or initial proof of concept phase.

Smith said IBM can fill a gap in the OpenShift Kubernetes ecosystem if it can establish Spectrum Protect as a platform for data protection and management moving forward.

Randy Kerns, a senior strategist and analyst at Evaluator Group, said early adopters often stand up their container-based applications separately from their virtual machine environments.

“Now you’re starting to see them look and say, ‘What data protection software do I have that’ll work with containers? And, does that work in my virtual machine environment as well?'” Kerns said. “This is an early stage thing for a lot of customers, but it’s really becoming more current as we go along. OpenShift is going to be one of the major deployment environments for containers, and IBM and Red Hat have a close relationship now.”

IBM Spectrum Protect Plus for VMware

In virtual environments, VMware administrators will be able to deploy IBM Spectrum Protect Plus in VMware Cloud on AWS. IBM said Spectrum Protect would support VMware Cloud on AWS, in addition to the IBM Cloud and various on-premises options available in the past. Herzog said IBM Spectrum Protect Plus would support backups to additional public clouds starting in 2020, in keeping with the storage division’s long-standing multi-cloud strategy.

Also this week, IBM introduced a new TS7770 Virtual Tape Library built with its latest Power 9 processors and higher density disks. The TS7770 will target customers of IBM’s new z15 mainframe, Herzog said.
