Tag Archives: storage

Investments in data storage vendors topped $2B in 2019

Data storage vendors received $2.1 billion in private funding in 2019, according to a SearchStorage.com analysis of data from websites that track venture funding. Not surprisingly, startups in cloud backup, data management and ultrafast scale-out flash continued to attract the most interest from private investors.

Six private data storage vendors closed funding rounds of more than $100 million in 2019, all in the backup/cloud sector. It’s a stretch to call most of these companies startups — all but one have been selling products for years.

A few vendors with disruptive storage hardware also got decent chunks of money to build out arrays and storage systems, although these rounds were much smaller than those the data protection vendors received.

According to a recent report by PwC/CB Insights MoneyTree, 213 U.S.-based companies closed funding rounds of at least $100 million last year. The report pegged overall funding for U.S. companies at nearly $108 billion, down 9% year on year but well above the $79 billion total from 2017.

Despite talk of a slowing global economy, data growth is expected to accelerate for years to come. And as companies mine new intelligence from older data, data centers need more storage and better management than ever. The funding is flowing more to vendors that manage that data than to systems that store it.

“Investors don’t lead innovation; they follow innovation. They see a hot area that looks like it’s taking off, and that’s when they pour money into it,” said Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore.

Here is a glance at the largest funding rounds by storage companies in 2019, starting with software vendors:

Kaseya Limited, $500 million: Investment firm TPG will help Kaseya further diversify the IT services it can offer to managed service providers. Kaseya has expanded into backup in recent years, adding dark web monitoring vendor ID Agent last year. That deal followed earlier pickups of Spanning Cloud Apps and Unitrends.

Veeam Software, $500 million: Veeam pioneered backup of virtual machines and serves many Fortune 500 companies. Insight Partners invested half a billion dollars in Veeam in January 2019, then bought Veeam outright in January 2020 at a $5 billion valuation. That deal may eventually lead to an IPO. Veeam is shifting its headquarters from Switzerland to the U.S., and Insight plans to focus on landing more U.S. customers.

Rubrik, $261 million: The converged storage vendor has amassed $553 million since launching in 2014. The latest round of Bain Capital investment reportedly pushed Rubrik’s valuation north of $3 billion. Flush with investment, Rubrik said it’s not for sale — but is shopping to acquire hot technologies, including AI, data analytics and machine learning.

Clumio, $175 million: Sutter Hill Ventures provided $40 million in April, on top of an $11 million 2017 round. It then came back for another $135 million bite in November, joined by Altimeter Capital. Clumio is using the money to add cybersecurity to its backup as a service in Amazon Web Services.

Acronis, $147 million: Acronis was founded in 2003, so it’s well into its second decade. But the veteran data storage vendor has a new focus on backup blended with cybersecurity and privacy, similar to Clumio. The Goldman Sachs-led funding helped Acronis acquire 5nine to manage data across hybrid Microsoft clouds.

Druva, $130 million: Viking Global Investors led a six-participant round that brought Druva money to expand its AWS-native backup and disaster recovery beyond North America to international markets. Druva has since added low-cost tiering to Amazon Glacier, and CEO Jaspreet Singh has hinted Druva may pursue an IPO.

Notable 2019 storage funding rounds

Data storage startups in hardware

Innovations in storage hardware underscore the ascendance of flash in enterprise data centers. Although fewer in number, the following storage startups are advancing fabrics-connected devices for high-performance workloads.

Over time, these data storage startups may mature to the point where they can deliver hardware that blends low latency, high IOPS and manageable cost, emerging as competitors to leading array vendors. For now, these products have a limited market of companies that need petabytes (PB) of storage or more, but the technologies bear watching due to their speed, density and performance potential.

Lightbits Labs, $50 million: The Israel-based startup created the SuperSSD array for NVMe flash. The Lightbits software stack turns a server’s standard TCP/IP networking into a switched Ethernet storage fabric, presenting all storage as a single giant SSD. SuperSSD starts at 64 PB before data reduction. Dell EMC led Lightbits’ funding, with contributions from Cisco and Micron Technology.

Vast Data, $40 million: Vast’s Universal Storage platform is not for everyone: the minimum configuration starts at 1 PB. The platform combines storage class memory and low-cost NAND for unified block, file and object storage. Norwest Venture Partners led the round, with participation from Dell Technologies Capital and Goldman Sachs.

Honorable mentions in hardware include Pavilion Data Systems and Liqid. Pavilion is one of the last remaining NVMe all-flash startups, picking up $25 million in a round led by Taiwania Capital and RPS Ventures to flesh out its Hyperparallel Flash Array.

Liqid is trying to break into composable infrastructure, a term coined by Hewlett Packard Enterprise for systems that let data centers dynamically assemble compute, storage and networking from shared pools of hardware as workloads require. Panorama Point Partners provided $28 million to help the startup build out its Liqid CI software platform.


For Sale – Huawei Matebook X Pro – i7, 512GB, MX150

I am selling my Huawei Matebook X Pro, i7, 8GB RAM, 512GB Storage, GPU MX150.
It is in excellent condition and I can’t find any scratches or dings on it anywhere.

I bought it from the Microsoft Store so I have been the only owner, and I purchased it on 21st November 2018.
It comes in the original box, with the original charger and the HDMI accessory that came with it.

The only reason for sale is that I was travelling a lot with work at the time and since that has died down, I’ve built a desktop.

This is an excellent laptop with a great display, and the battery life has never let me down.


Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.
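In Kubernetes terms, that request is typically expressed as a persistent volume claim against a storage class the operator serves. The following sketch, written with the Kubernetes Python client, shows roughly what indicating capacity looks like; the namespace, claim name and storage class name are illustrative placeholders rather than values documented by Red Hat.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config is also possible).
config.load_kube_config()
core = client.CoreV1Api()

# A claim for 100 GiB of block storage; the storage class name is a placeholder
# for whatever class the operator exposes in a given cluster.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-block",  # illustrative placeholder
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

# The operator watches for claims like this and provisions the backing storage.
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)
```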

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.
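That consistency means the same S3 client code can target AWS or a NooBaa-fronted on-premises store simply by changing the endpoint. Below is a minimal sketch using boto3; the endpoint URL, bucket name and credentials are hypothetical placeholders, not values taken from Red Hat documentation.

```python
import boto3

# The same client code works against AWS S3 or any S3-compatible endpoint;
# only the endpoint URL and credentials change. Values here are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-noobaa.apps.example.internal",  # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.put_object(Bucket="app-bucket", Key="reports/2020-01.csv", Body=b"id,value\n1,42\n")
obj = s3.get_object(Bucket="app-bucket", Key="reports/2020-01.csv")
print(obj["Body"].read().decode())
```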

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users moving a large data set to public cloud storage need to factor in network bandwidth and migration time, and consider egress charges if they need to bring the data back from the cloud.


Major storage vendors map out 2020 plans

The largest enterprise storage vendors face a common set of challenges and opportunities heading into 2020. As global IT spending slows and storage gets faster and frequently handles data outside the core data center, primary storage vendors must turn to cloud, data management and newer flash technologies.

Each of the major storage vendors has its own plans for dealing with these developments. Here is a look at what the major primary storage vendors did in 2019 and what you can expect from them in 2020.

Dell EMC: Removing shadows from the clouds

2019 in review: Enterprise storage market leader Dell EMC spent most of 2019 bolstering its cloud capabilities, in many cases trying to play catch-up. New cloud products include VMware-orchestrated Dell EMC Cloud Platform arrays that integrate Unity and PowerMax storage, coupled with VxBlock converged and VxRail hyper-converged infrastructure.

The new Dell EMC Cloud gear allows customers to build and deploy on-premises private clouds with the agility and scale of the public cloud — a growing need as organizations dive deeper into AI and DevOps.

What’s on tap for 2020: Dell EMC officials have hinted at a new Power-branded midrange storage system for several years, and a formal unveiling of that product is expected in 2020. Then again, Dell initially said the next-generation system would arrive in 2019. Customers with existing Dell EMC midrange storage likely won’t be forced to upgrade, at least not for a while. The new storage platform will likely converge features from Dell EMC Unity and SC Series midrange arrays with an emphasis on containers and microservices.

Dell will enhance its tool set for containers to help companies deploy microservices, said Sudhir Srinivasan, CTO of Dell EMC storage. He said containers are a prominent design feature in the new midrange storage.

“Software stacks that were built decades ago are giant monolithic pieces of code, and they’re not going to survive that next decade, which we call the data decade,” Srinivasan said. 

Hewlett Packard Enterprise’s eventful year

2019 in review: In terms of product launches and partnerships, Hewlett Packard Enterprise (HPE) had a busy year in 2019. HPE Primera all-flash storage arrived in late 2019,  and HPE expects customers will slowly transition from its flagship 3PAR platform. Primera supports NVMe flash, embedding custom chips in the chassis to support massively parallel data transport on PCI Express lanes. The first Primera customer, BlueShore Financial, received its new array in October.

HPE bought supercomputing giant Cray to expand its presence in high-performance computing, and made several moves to broaden its hyper-converged infrastructure options. HPE ported InfoSight analytics to HPE SimpliVity HCI, as part of the move to bring the cloud-based predictive tools picked up from Nimble Storage across all HPE hardware. HPE launched a Nimble dHCI disaggregated HCI product and partnered with Nutanix to add Nutanix HCI technology to HPE GreenLake services while allowing Nutanix to sell its software stack on HPE servers.

It capped off the year with HPE Container Platform, which is designed to make it easier to spin up Kubernetes-orchestrated containers on bare metal. The Container Platform uses technology from recent HPE acquisitions MapR and BlueData.

What’s on tap for 2020: HPE vice president of storage Sandeep Singh said more analytics are coming in response to customer calls for simpler storage. “An AI-driven experience to predict and prevent issues is a big game-changer for optimizing their infrastructure. Customers are placing a much higher priority on it in the buying motion,” helping to influence HPE’s roadmap, Singh said.

It will be worth tracking the progress of GreenLake as HPE moves towards its goal of making all of its technology available as a service by 2022.

Hitachi Vantara: Renewed focus on traditional enterprise storage

2019 in review: Hitachi Vantara renewed its focus on traditional data center storage, a segment it had largely conceded to other array vendors in recent years. Hitachi underwent a major refresh of the Hitachi Virtual Storage Platform (VSP) flash array in 2019. The VSP 5000 SAN arrays scale to 69 PB of raw storage, and capacity extends higher with hardware-based deduplication in its Flash Storage Modules. By virtualizing third-party storage behind a VSP 5000, customers can scale capacity to 278 PB.

What’s on tap for 2020: The VSP 5000 integrates Hitachi Accelerated Fabric networking technology that enables storage to scale out and scale up. Hitachi this year plans to phase the networking into other high-performance storage products, said Colin Gallagher, a Hitachi vice president of infrastructure products.

“We had been lagging in innovation, but with the VSP5000, we got our mojo back,” Gallagher said.

Hitachi arrays support containers, and Gallagher said the vendor is considering whether it needs to evolve its support beyond a Kubernetes plugin, as other vendors have done. Hitachi plans to expand data management features in Hitachi Pentaho analytics software to address AI and DevOps deployments. Gallagher said Hitachi’s data protection and storage as a service is another area of focus for the vendor in 2020.

IBM: Hybrid cloud with cyber-resilient storage

2019 in review: IBM brought out the IBM Elastic Storage Server 3000, an NVMe-based array packaged with IBM Spectrum Scale parallel file storage. Elastic Storage Server 3000 combines NVMe flash and containerized software modules to provide faster time to deployment for AI, said Eric Herzog, IBM’s vice president of worldwide storage channels.

In addition, IBM added PCIe-enabled NVMe flash to VersaStack converged infrastructure and midrange Storwize SAN arrays.

What to expect in 2020: Like other storage vendors, IBM is trying to navigate the unpredictable waters of cloud and services. Its product development revolves around storage that can run in any cloud. IBM Cloud Services enables end users to lease infrastructure, platforms and storage hardware as a service. The program has been around for two years, and will add IBM software-defined storage to the mix this year. Customers thus can opt to purchase hardware capacity or the IBM Spectrum suite in an OpEx model. Non-IBM customers can run Spectrum storage software on qualified third-party storage.

“We are going to start by making Spectrum Protect data protection available, and we expect to add other pieces of the Spectrum software family throughout 2020 and into 2021,” Herzog said.

Another IBM development to watch in 2020 is how its $34 billion acquisition of Red Hat affects either vendor’s storage products and services.

NetApp: Looking for a rebound

2019 in review: Although spending slowed for most storage vendors in 2019, NetApp saw the biggest decline. At the start of 2019, NetApp forecast annual sales at $6 billion, but poor sales forced NetApp to slash its guidance by around 10% by the end of the year.

NetApp CEO George Kurian blamed the revenue setbacks partly on poor sales execution, a failing he hopes will improve as NetApp institutes better training and sales incentives. The vendor also said goodbye to several top executives who retired, raising questions about how it will deliver on its roadmap going forward.

What to expect in 2020: In the face of the turbulence, Kurian kept NetApp focused on the cloud. NetApp plowed ahead with its Data Fabric strategy to enable OnTap file services to be consumed, via containers, in the three big public clouds.  NetApp Cloud Data Service, available first on NetApp HCI, allows customers to consume OnTap storage locally or in the cloud, and the vendor capped off the year with NetApp Keystone, a pay-as-you-go purchasing option similar to the offerings of other storage vendors.

Although NetApp plans hardware investments, storage software will account for more revenue as companies shift data to the cloud, said Octavian Tanase, senior vice president of the NetApp OnTap software and systems group.

“More data is being created outside the traditional data center, and Kubernetes has changed the way those applications are orchestrated. Customers want to be able to rapidly build a data pipeline, with data governance and mobility, and we want to try and monetize that,” Tanase said.

Pure Storage: Flash for backup, running natively in the cloud

2019 in review: The all-flash array specialist broadened its lineup with FlashArray//C SAN arrays and denser FlashBlade NAS models. FlashArray//C extends the Pure Storage flagship with a model that supports Intel Optane DC SSD-based MemoryFlash modules and quad-level cell NAND SSDs in the same system.

Pure also took a major step on its journey to convert FlashArray into a unified storage system by acquiring Swedish file storage software company Compuverde. It marked the second acquisition in as many years for Pure, which acquired deduplication software startup StorReduce in 2018.

What to expect in 2020: The gap between disk and flash prices has narrowed enough that it’s time for customers to consider flash for backup and secondary workloads, said Matt Kixmoeller, Pure Storage vice president of strategy.

“One of the biggest challenges — and biggest opportunities — is evangelizing to customers that, ‘Hey, it’s time to look at flash for tier two applications,'” Kixmoeller said.

Flexible cloud storage options and more storage delivered in software are other items on Pure’s roadmap. Cloud Block Store, which Pure introduced last year, is just getting started, Kixmoeller said, and is expected to generate lots of attention from customers. Most vendors support Amazon Elastic Block Store by sticking their arrays in a colocation center and running their operating software on EBS, but Pure took a different approach: it reengineered the back-end software layer to run natively on Amazon S3.



Box vs. Dropbox outages in 2019

In this infographic, we present a timeline of significant service disruptions in 2019 for Box vs. Dropbox.


Cloud storage providers Box and Dropbox self-report service disruptions throughout each year. In 2019, Dropbox posted publicly about eight incidents; Box listed more than 50. But the numbers don’t necessarily provide an apples-to-apples comparison, because each company gets to choose which incidents to disclose.

This infographic includes significant incidents that prevented users from accessing Box or Dropbox in 2019, or at least from uploading and downloading documents. It excludes outages that appeared to last 10 minutes or fewer, as well as incidents labeled as having only “minor” or “medium” impact.
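As a rough illustration of that filter, the snippet below applies the same two rules to a handful of made-up incident records; the fields and values are invented for the example and do not come from either provider's status data.

```python
from datetime import timedelta

# Made-up incident records for illustration only.
incidents = [
    {"service": "Box", "duration": timedelta(minutes=45), "impact": "major"},
    {"service": "Dropbox", "duration": timedelta(minutes=8), "impact": "minor"},
    {"service": "Box", "duration": timedelta(minutes=25), "impact": "medium"},
]

def is_significant(incident):
    # Keep incidents that lasted more than 10 minutes and were not
    # labeled as having only "minor" or "medium" impact.
    return (incident["duration"] > timedelta(minutes=10)
            and incident["impact"] not in {"minor", "medium"})

significant = [i for i in incidents if is_significant(i)]
print(significant)  # only the 45-minute "major" Box incident survives the filter
```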

To view the full list of 2019 incidents for Box vs. Dropbox, visit status.box.com and status.dropbox.com.


Cloudian CEO: AI, IoT drive demand for edge storage

AI and IoT are driving demand for edge storage because data is being created faster than it can reasonably be moved across clouds, object storage vendor Cloudian’s CEO said.

Cloudian CEO Michael Tso said “Cloud 2.0” is giving rise to the growing importance of edge storage among other storage trends. He said customers are getting smarter about how they use the cloud, and that’s leading to growing demand for products that can support private and hybrid clouds. He also detects an increased demand for resiliency against ransomware attacks.

We spoke with Tso about these trends, including the Edgematrix subsidiary Cloudian launched in September 2019 that focuses on AI use cases at the edge. Tso said we can expect more demand for edge storage and spoke about an upcoming Cloudian product related to this. He also talked about how AI relates to object storage, and if Cloudian is preparing other Edgematrix-like spinoffs.

What do you think storage customers are most concerned with now?
Michael Tso: I think there is a lot, but I’ll just concentrate on two things here. One is that they continue to just need lower-cost, easier to manage and highly scalable solutions. That’s why people are shifting to cloud and looking at either public or hybrid/private.

Related to that point is I think we’re seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it’s not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely.

I think there’s the broad brush of people needing scalable solutions and lower costs — and that will probably always be there — but the undertone is people getting smarter about private and hybrid.

Point number two is around data protection. We’re now seeing more and more customers worried about ransomware. They’re keeping backups for longer and longer and there is a strong need for write-once compliant storage. They want to be assured that any ransomware that is attacking the system cannot go back in time and mess up the data that was stored from before.

Cloudian actually invested very heavily in building write-once compliant technologies, primarily for the financial and military markets because that was where we were seeing it first. Now it’s become a feature that almost everyone we talk to that is doing data protection is asking for.

People are getting smarter about hybrid and multi-cloud, but what’s the next big hurdle to implementing it?

Tso: I think as people are now thinking about a post-cloud world, one of the problems that large enterprises are coming up with is data migration. It’s not easy to add another cloud when you’re fully in one. I think if there’s any kind of innovation in being able to off-load a lot of data between clouds, that will really free up that marketplace and allow it to be more efficient and fluid.

Right now, cloud is a bunch of silos. Whatever data people have stored in cloud one is kind of what they’re stuck with, because it will take them a lot of money to move data out to cloud two, and it’s going to take them years. So, they’re kind of building strategies around that as opposed to really, truly being flexible in terms of where they keep data.

What are you seeing on the edge?

Tso: We’re continuing to see more and more data being created at the edge, and more and more use cases of the data needing to be stored close to the edge because it’s just too big to move. One classic use case is IoT. Sensors, cameras — that sort of stuff. We already have a number of large customers in the area and we’re continuing to grow in that area.

The edge can mean a lot of different things. Unfortunately, a lot of people are starting to hijack that word and make it mean whatever they want it to mean. But what we see is just more and more data popping up in all kinds of locations, with the need of having low-cost, scalable and hybrid-capable storage.

We’re working on getting a ruggedized, easy-to-deploy cloud storage solution. What we learned from Edgematrix was that there’s a lot of value to having a ruggedized edge AI device. But the unit we’re working on is going to be more like a shipping container or a truck as opposed to a little box like with Edgematrix.

What customers would need a mobile cloud storage device like you just described?

Tso: There are two distinct use cases here. One is that you want a cloud on the go, meaning it is self-contained. It means if the rest of the infrastructure around you has been destroyed, or your internet connectivity has been destroyed, you are still able to do everything you could do with the cloud. The intention is a completely isolatable cloud.

In the military application, it’s very straightforward. You always want to make sure that if the enemy is attacking your communication lines and shooting down satellites, wherever you are in the field, you need to have the same capability that you have during peak time.

But the civilian market, especially in global disaster, is another area that we are seeing demand. It’s state and local governments asking for it. In the event of a major disaster, oftentimes for a period, they don’t have any access to the internet. So the idea is to run in a cloud in a ruggedized unit that is completely stand-alone until connectivity is restored.

AI-focused Edgematrix started as a Cloudian idea. What does AI have to do with object storage?
Tso: AI is an infinite data consumer. Improvements on AI accuracy is a log scale — it’s an exponential scale in terms of the amount of data that you need for the additional improvements in accuracy. So, a lot of the reasons why people are accumulating all this data is to run their AI tools and run AI analysis. It’s part of the reason why people are keeping all their data.

Being S3 object store compatible is a really big deal because that allows us to plug into all of the modern AI workloads. They’re all built on top of cloud-native infrastructure, and what Cloudian provides is the ability to run those workloads wherever the data happens to be stored, and not have to move the data to another location.

Are you planning other Edgematrix-like spinoffs?
Tso: Not in the immediate future. We’re extremely pleased with the way Edgematrix worked out, and we certainly are open to do more of this kind of spin off.

We’re not a small company anymore, and one of the hardest things for startups in our growth stage is balancing creativity and innovation with growing the core business. We seem to have found a good sort of balance, but it’s not something that we want to do in volume because it’s a lot of work.


SwiftStack layoffs reflect change in focus to AI, analytics

Object storage specialist SwiftStack laid off employees in sales, marketing and partner relations this month, while shifting its focus to artificial intelligence, machine learning and big data analytics use cases.

The San Francisco software vendor originally concentrated on backing up and archiving unstructured data on commodity servers with its commercially supported and enhanced version of open source OpenStack Swift. SwiftStack gradually expanded into new areas over the past eight years. The vendor claims its latest 7.0 product supports clusters that can scale linearly to petabytes of data and support throughput in excess of 100 GB per second.

Seeking to differentiate

Erik Pounds, SwiftStack’s vice president of marketing, said SwiftStack will steer away from use cases such as low-cost, long-term repositories for backup applications, replacements for tape archives, and on-premises alternatives to Amazon S3 or Glacier.

Pounds said “object storage is commoditizing” in those areas.

“These are examples of good uses for object storage, and even SwiftStack, but for us to distinguish ourselves in a crowded field, we need to compete in areas where we have strong product differentiation,” Pounds said. “Tier I technology vendors are aggressively going after these types of opportunities to preserve and grow footprint, and it quickly becomes a race to the bottom.”

Pounds said SwiftStack’s new focus is on “more modern” AI, machine learning and analytics use cases — where customers need to access data across edge, core and cloud environments. That shift in focus required the company to change “outward-facing parts of the organization” in order to stay within operating budgets.

SwiftStack did not disclose the number of employees it laid off. LinkedIn indicates the company has 63 employees, but Pounds confirmed the Dec. 18 layoffs left SwiftStack “shy of 50.” He said the company still has “a healthy sales team with complete regional coverage,” despite the loss of “valued members” of the sales, marketing and partner team.

Pounds stressed that the organizational change “did not negatively affect the product and engineering team”. He said that team received additional resources. He also denied that the cuts will change SwiftStack’s product development work and release schedules.

“We continuously release new versions of SwiftStack on a three-week cadence, so once new functionality is developed and tested, it gets in the hands of our customers quickly,” Pounds said.

Azure support

In mid-December, SwiftStack 1space added support for Microsoft Azure Blob Storage to complement the product’s support for Amazon S3 and Google Cloud Storage. Pounds said more advanced Azure support would come in January with a SwiftStack 7 update.

SwiftStack 1space creates a single namespace to enable users to access, migrate and search data spanning public and on-premises cloud object systems. A new 1space File Connector extension enables users to access data stored in file systems.


Enterprise flash adoption poised for new uses, experts say

Flash storage appears poised for greater adoption in enterprise data centers during 2020.

A developing NVMe flash ecosystem and new quad-level cell (QLC) NAND SSDs will extend enterprise flash storage to workloads previously relegated to disk. Edge computing will spur interest in storage class memory, and customers will start experimenting with persistent storage for AI and containers.

Those conclusions surfaced from interviews with flash analysts and storage vendors. Here is a summary of what enterprises should watch for in 2020.

Flash isn’t only for big companies

Enterprise flash arrays first appeared around 2008, incorporating solid-state media to replace electromechanical hard disk drives. Vendors first aimed all-flash arrays at high-end enterprises that could justify the premium cost to support performance-heavy applications. Hybrid arrays that combine HDDs and SSDs followed for midrange and smaller organizations.

Flash was at such a premium in the early days of enterprise use that it was limited mostly to large organizations and the most important applications. But prices have dropped consistently over the past decade. Steve McDowell, a senior analyst for storage and data center technologies at Moor Insights & Strategy, said falling flash prices have made the technology a realistic option for more organizations.

“Nearline storage is pretty much all on flash right now, at least as far as replacement cycles happen. Companies may not be ripping out their disk systems now, but they realize flash gives you better efficiency and density per rack unit,” McDowell said.

Prices have dropped enough that there is little difference between enterprise flash and high-performing hard disk drives, said Eric Herzog, the vice president of worldwide storage channels at IBM Storage.

“In many cases, it doesn’t pay to use disk arrays or even hybrid anymore. The price of all-flash arrays now is basically on par with high-performance disk, plus you get better total cost of ownership,” Herzog said.

NVMe set to dominate enterprise flash storage

Nonvolatile memory express (NVMe) flash is expected to make further inroads in the data center. Intel Optane SSDs deliver performance that falls between dynamic RAM and NAND flash. The Optane drives are based on 3D XPoint memory technology that was initially developed by Intel and Micron Technology, a partnership that ended in 2018. Micron launched its own 3D XPoint products in October 2019.

Major storage vendors are adding NVMe flash to their arrays, usually in conjunction with SAS or SATA SSD connectivity. NVMe SSDs use PCI Express lanes to enable faster communication between applications and storage.

Tiering is in vogue again

QLC is a lower-cost alternative to TLC NAND SSDs, and vendors see it as suitable for read-intensive and light write workloads. While QLC stores four bits per cell to TLC’s three, it has poorer performance and write endurance. Storage vendors are retooling their data management software to support QLC, NVMe and persistent memory in the same system, said Eric Burgener, a research vice president of storage at analyst firm IDC.

“We are starting to see the return of tiering for high-performance applications,” Burgener said.

Although fabrication plants are still ramping production of QLC NAND, vendors are designing systems that place a tier of superfast persistent storage on the controller, backed by a standard tier of NVMe SSDs. Examples include Dell EMC PowerMax, Hewlett Packard Enterprise Primera, the Hitachi Vantara Virtual Storage Platform VSP5000, NetApp MaxData and Pure Storage FlashArray//C.

Flash for backup, object storage

AI at the edge is also expanding flash use cases, said Sudhir Srinivasan, CTO at Dell EMC storage.

Srinivasan said more customers have moved to flash due to its operational simplicity, even for traditionally disk-based workloads.

“Most backup is still on disk, but we do have customers placing certain primary data sets on secondary devices for analytics. And that data needs a higher level of performance,” Srinivasan said.

Backup and rapid recovery is an unexpected use for Pure Storage FlashBlade all-flash NAS. The product’s massively parallel bandwidth helped Domino’s Pizza reduce dependence on disk-based backup, said Dan Djuric, a Domino’s vice president of global infrastructure and enterprise information systems. 

“Anytime we have to share file systems, we launch FlashBlade. We also use Pure FlashBlade as the framework for all our data capture, so it’s more than just backup and recovery,” Djuric said.

Flash is also penetrating converged systems, said Octavian Tanase, senior vice president of the NetApp OnTap software and systems group. NetApp FlexPod is a converged infrastructure based on NetApp FAS storage and Cisco compute and networking. Roughly 60% of FlexPod sales were for all-flash systems, Tanase said.

McDowell said all-flash is also coming to object arrays to help enterprises analyze unstructured data created in edge environments.

“Some data never leaves the edge. Some gets consumed in the cloud, and object is the language it speaks. It’s an object-centric world and all-flash is a natural fit,” McDowell said.

Don’t write off disk yet

IDC’s Burgener said demand for NVMe all-flash arrays will grow 67% during the next five years, with SCSI-based flash array growth approaching 11%. IDC expects the market for hybrid arrays to contract by 2%.

Still, enterprise flash is not in every data center. Nearly one-third of companies have no plans to install flash, according to a recent survey by 451 Research. The survey of nearly 500 data center administrators found that 48% of enterprises have flash, while 6% are running proofs of concept and 13% plan all-flash purchases within two years.

“There are a surprisingly high percentage of enterprises that haven’t even looked at flash yet, so there is a ways to go” before the arrival of an all-flash data center, said Tim Stammers, a senior analyst for storage at 451 Research.

Stammers said high-capacity disks are hitting the market as larger enterprises deploy converged systems and archive data to the hybrid cloud. “Disk isn’t dead,” he said. “With so much data going to the cloud, disk has a long and healthy future” as archival storage.


AWS storage changes the game inside, outside data centers

The impact of Amazon storage on the IT universe extends beyond the servers and drives that store exabytes of data on demand for more than 2.2 million customers. AWS also influenced practitioners to think differently about storage and change the way they operate.

Since Amazon Simple Storage Service (S3) launched in March 2006, IT pros have re-examined the way they buy, provision and manage storage. Infrastructure vendors have adapted the way they design and price their products. That first AWS storage service also spurred a raft of technology companies — most notably Microsoft and Google — to focus on public clouds.

“For IT shops, we had to think of ourselves as just another service provider to our internal business customers,” said Doug Knight, who manages storage and server services at Capital BlueCross in central Pennsylvania. “If we didn’t provide good customer service, performance, availability and all those things that you would expect out of an AWS, they didn’t have to use us anymore.

“That was the reality of the cloud,” Knight said. “It forced IT departments to evolve.”

The Capital BlueCross IT department became more conscious of storing data on the “right” and most cost-effective systems to deliver whatever performance level the business requires, Knight said. The AWS alternative gives users a myriad of choices, including block, file and scale-out object storage, fast flash and slower spinning disk, and Glacier archives at differing price points.

“We think more in the context of business problems now, as opposed to just data and numbers,” Knight said. “How many gigabytes isn’t relevant anymore.”

Capital BlueCross’ limited public cloud footprint consists of about 100 TB of a scale-out backup repository in Microsoft’s Azure Blob Storage and the data its software-as-a-service (SaaS) applications generate. Knight said the insurer “will never be in one cloud,” and he expects to have workloads in AWS someday. Knight said he has noticed his on-premises storage vendors have expanded their cloud options. Capital BlueCross’ main supplier, IBM, even runs its own public cloud, although Capital BlueCross doesn’t use it.

Expansion of consumption-based pricing

Facing declining revenue, major providers such as Dell EMC, Hewlett Packard Enterprise and NetApp introduced AWS-like consumption-based pricing to give customers the choice of paying only for the storage they use. The traditional capital-expense model often leaves companies overbuying storage as they try to project their capacity needs over a three- to five-year window.

While the mainstream vendors pick up AWS-like options, Amazon continues to bolster its storage portfolio with enterprise capabilities found in on-premises block-based SAN and file-based NAS systems. AWS added its Elastic Block Store (EBS) in August 2008 for applications running on Elastic Compute Cloud (EC2) instances. File storage took longer, with the Amazon Elastic File System (EFS) arriving in 2016 and FSx for Lustre and Windows File Server in 2018.

AWS ventured into on-premises hardware in 2015 with a Snowball appliance to help businesses ship data to the cloud. In late 2019, Amazon released Outposts hardware that gives customers storage, compute and database resources to build on-premises applications using the same AWS tools and services that are available in the cloud.

Amazon S3 API impact

Amid the ever-expanding breadth of offerings, it’s hard to envision any AWS storage option approaching the popularity and influence of the first one. Simple Storage Service, better known as S3, stores objects on cheap, commodity servers that can scale out in seemingly limitless fashion. Amazon did not invent object storage, but its S3 application programming interface (API) has become the de facto industry standard.

“It forced IT to look at redesigning their applications,” Gartner research vice president Julia Palmer said of S3.  

Amazon storage timeline
AWS storage has grown from the object-based Simple Storage Service (S3) to include block, file, archival and on-premises options.

Palmer said when she worked in engineering at GoDaddy, the Internet domain registrar and service provider designed its own object storage to talk to various APIs. But the team members gradually realized they would need to focus on the S3 API that everyone else was going to use, Palmer said.

Every important storage vendor now supports the S3 API to facilitate access to object storage. Palmer said that, although object systems haven’t achieved the level of success on premises that they have in the cloud, the idea that storage can be flexible, infinitely scalable and less costly by running on commodity hardware has had a dramatic impact on the industry.

“Before, it was file or block,” she said. “And that was it.”

Object storage use cases expand

With higher-performance storage emerging in the cloud and on premises, object storage is expanding beyond its original backup and archiving use cases to workloads such as big data analytics. For instance, Pure Storage and NetApp sell all-flash hardware for object storage, and object software pioneer SwiftStack improves throughput through parallel I/O.

Enrico Signoretti, a senior data storage analyst at GigaOm, said he fields calls every day from IT pros who want to use object storage for more use cases.

“Everyone is working to make object storage faster,” Signoretti said. “It’s growing like crazy.”

Major League Baseball (MLB) is trying to get its developers to move away from files and write to S3 buckets, as it plans a 10- to 20-PB open source Ceph object storage cluster. Truman Boyes, MLB’s SVP of infrastructure, said developers have been working with files for so long that it will take time to convince them that the object approach could be easier. 

“From an application designer’s perspective, they don’t have to think about how to have resilient storage. They don’t have to worry if they’ve copied it to the right number of places and built in all these mechanisms to ensure data integrity,” Boyes said. “It just happens. You talk to an API, and the API figures it out for you.”
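The shift Boyes describes shows up directly in application code. Here is a brief sketch contrasting the two approaches; the endpoint, bucket and key names are hypothetical, although Ceph's RADOS Gateway does expose an S3-compatible API that boto3 can talk to.

```python
import boto3

game_feed_json = '{"game_pk": 12345, "status": "final"}'  # stand-in payload

# File approach: the application (or the ops team) owns replication,
# backups and integrity checking for this path.
with open("game-feed-2019-10-30.json", "w") as f:
    f.write(game_feed_json)

# Object approach: the client just talks to the S3 API; the cluster decides
# where copies live and keeps the required number of them.
s3 = boto3.client("s3", endpoint_url="https://rgw.example.internal")  # hypothetical endpoint
s3.put_object(
    Bucket="game-feeds",                  # hypothetical bucket
    Key="2019/10/30/game-12345.json",
    Body=game_feed_json.encode(),
)
```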

Ken Rothenberger, an enterprise architect at General Mills, said Amazon S3 object storage significantly influenced the way he thinks about data durability. Rothenberger said the business often mandates zero data loss, and traditional block storage requires the IT department to keep backups and multiple copies of data.

By contrast, AWS S3 and Glacier stripe data across at least three facilities located 10 km to 60 km away from each other and provide 99.999999999% durability. Amazon technology VP Bill Vass said the 10 km distance is to withstand an F5 tornado that is 5 km wide, and the 60 km is for speed-of-light latency. “Certainly none of the other cloud providers do it by default,” Vass said.
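A toy calculation illustrates why spreading copies across independent facilities piles up the nines. This is a back-of-the-envelope sketch that assumes an arbitrary per-facility loss probability and full independence; it is not Amazon's actual durability model.

```python
# Assume each facility independently loses a given object in a year with
# probability p (an illustrative number, not a published figure).
p = 1e-4

# With one copy, annual durability is simply 1 - p.
single_copy = 1 - p

# With copies in three independent facilities, the object is lost only if
# all three copies are lost in the same year.
three_copies = 1 - p ** 3

print(f"one copy:     {single_copy:.6f}")    # 0.999900
print(f"three copies: {three_copies:.15f}")  # 0.999999999999000
```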

AWS storage challengers

Startup Wasabi Technologies claims to provide 99.999999999% durability through a different technology approach, and takes aim at Amazon S3 Standard on price and performance. Wasabi eliminated data egress fees to target one of the primary complaints of AWS storage customers.

Vass countered that egress charges pay for the networking gear that enables access at 8.8 terabits per second on S3. He also noted that AWS frequently lowers storage prices, just as it does across the board for all services.

“You don’t usually get that aggressive price reduction from on-prem [options], along with the 11 nines durability automatically spread across three places,” Vass said.

Amazon’s shortcomings in block and file storage have given rise to a new market of “cloud-adjacent” storage providers, according to Marc Staimer, president of Dragon Slayer Consulting. Staimer said Dell EMC, HPE, Infinidat and others put their storage into facilities located within close proximity of AWS compute nodes. They aim to provide a “faster, more scalable, more secure storage” alternative to AWS, Staimer said.

But the most serious cloud challengers for AWS storage remain Azure and Google. AWS also faces on-premises challenges from traditional vendors that provide the infrastructure for data centers where many enterprises continue to store most of their data.

Cloud vs. on-premises costs

Jevin Jensen, VP of global infrastructure at Mohawk Industries, said he tracks the major cloud providers’ prices and keeps an open mind. But at this point, he finds his company can keep its “fully loaded” costs at least 20% lower by running its SAP, payroll, warehouse management and other business-critical applications in-house, with on-premises storage.

Jensen said the cost delta between the cloud and Mohawk’s on-premises data center was initially about 50%, leaving him to wonder, “Why are we even thinking about cloud?” He said the margin dropped to 20% or 30% as AWS and the other cloud providers reduced their prices.

Like many enterprises, Mohawk uses the public cloud for SaaS applications and credit card processing. The Georgia-based global flooring manufacturer also has Azure for e-commerce. Jensen said the mere prospect of moving more workloads and data off-site enables Mohawk to secure better discounts from its infrastructure suppliers.

“They know we have stuff in Azure,” Jensen said. “They know we can easily go to Amazon.”
