
Nutanix Objects 2.0 lays groundwork for hybrid cloud

Nutanix pitched its latest Objects update at scale-out object storage for big data workloads, but experts said the hyper-converged infrastructure specialist is likely preparing for a push into the cloud.

Nutanix Objects 2.0 introduced features aimed at big data workloads. The new multicluster support consolidates Nutanix clusters to a single namespace, allowing for simpler, single-console management. Nutanix Objects 2.0 also added a 240 TB node that is larger than any other Nutanix HCI node, allowing more capacity per cluster. The update also added WORM support and Splunk certification.

Nutanix Objects, which launched in August 2019, provides software-defined object storage and is a stand-alone product from Nutanix HCI. Greg Smith, vice president of product marketing at Nutanix, said typical use cases include unstructured data archiving, big data and analytics. He has also noticed an uptick in cloud-native application development in AWS S3 environments. Nutanix Objects uses an S3 interface.

“We see increasing demand for object storage, particularly for big data,” Smith said.

Supporting cloud-native development is the real endgame of Nutanix Objects 2.0, said Eric Slack, senior analyst at Evaluator Group. The new features and capabilities aren’t meant to capture customers with traditional object storage use cases, because it’s not cost-effective to put multiple petabytes on HCI. He said no one is waiting for an S3 interface before buying into HCI.

However, that S3 interface is important because, according to Slack, “S3 is what cloud storage talks.”

Slack believes the enhancements to Nutanix Objects will lay the groundwork for Nutanix Clusters, which is currently in beta. Nutanix Clusters allows Nutanix HCI to run in the cloud and communicate with Nutanix HCI running in the data center. This means organizations can develop applications on-site and run them in the cloud, or vice versa.

“I think that’s why they’re doing this — they’re getting ready for Nutanix Clusters,” Slack said. “This really plays into their cloud design, which is a good idea.”

Organizations want that level of flexibility right now because they do not know which workloads are more cost-efficient to run on premises or in the cloud. Having that same, consistent S3 interface is ideal for IT, Slack said, because it means their applications will run wherever it’s cheaper.

Some organizations were burned during the cloud’s initial hype, moving many of their workloads there only to find their costs went up. Slack said that has led to repatriation back into data centers as businesses do the cost analysis.

“Cloud wasn’t everything we thought it was,” Slack said.

Scott Sinclair, senior analyst at Enterprise Strategy Group (ESG), came to a similar conclusion about the importance of Nutanix Objects. HCI is about consolidating and simplifying server, network and storage, and Objects expands Nutanix HCI into covering object storage’s traditional use cases: archive and active archive. However, there are growing use cases centered around developing in S3.

“We’re seeing the development of apps that write to an S3 API that may not be what we classify as traditional archive,” Sinclair said.
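The portability argument can be sketched in code. In the hypothetical example below, the application logic would be identical whether it targets a public cloud bucket or an on-premises S3-compatible store; only the endpoint differs. The endpoint addresses and the helper function are illustrative, not part of any Nutanix product.

```python
# Illustrative sketch: the same S3-style application code can target either
# a public cloud or an on-premises S3-compatible object store just by
# swapping the endpoint. All names and addresses here are hypothetical.

def s3_client_config(target: str) -> dict:
    """Return connection settings for an S3-compatible client.

    Only the endpoint differs between cloud and on-prem; the API calls
    the application makes (put_object, get_object, ...) stay the same.
    """
    endpoints = {
        "aws": "https://s3.amazonaws.com",
        "on_prem": "https://objects.example.internal",  # hypothetical address
    }
    return {
        "service_name": "s3",
        "endpoint_url": endpoints[target],
        "use_ssl": True,
    }

# An SDK such as boto3 accepts settings of this shape, e.g.:
#   boto3.client(**s3_client_config("on_prem"))
```

Because the interface is the same on both sides, moving the workload is a configuration change rather than a rewrite, which is the flexibility Slack describes.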

Screenshot of Nutanix Objects 2.0
Nutanix’s S3 interface streamlines interaction between on-premises and cloud environments.

Citing ESG’s 2020 Technology Spending Intentions survey, Sinclair said 64% of IT decision-makers said their IT environments are more complex than they were years ago. Coupled with other data pointing to skills shortages in IT, Sinclair said organizations are currently looking for ways to simplify their data centers, resulting in interest in HCI.

That same survey also found 24% of respondents said they needed to go hybrid, with the perception that using the cloud is easier. Sinclair said this will logically lead to an increase in the use of S3 protocols, which is why Nutanix Objects is well-positioned. Right now, IT administrators know they need to use both on-premises and cloud resources, but they don’t know to what extent they should use either. That’s why businesses are taking the most flexible approach.

“Knowing that you don’t know is a smart position to take,” Sinclair said.


Nvidia scoops up object storage startup SwiftStack

Nvidia plans to acquire object storage vendor SwiftStack to help its customers accelerate their artificial intelligence, high-performance computing and data analytics workloads.

The GPU vendor, based in Santa Clara, Calif., will not sell SwiftStack software but will use SwiftStack’s 1space as part of its internal artificial intelligence (AI) stack. It will also enable customers to use the SwiftStack software as part of their AI stacks, according to Nvidia’s head of enterprise computing, Manuvir Das.

SwiftStack and Nvidia disclosed the acquisition today. They did not reveal the purchase price, but said they expect the deal to close within weeks.

Nvidia previously worked with SwiftStack

Nvidia worked with San Francisco-based SwiftStack for more than 18 months on tackling the data challenges associated with running AI applications at a massive scale. Nvidia found 1space particularly helpful. SwiftStack introduced 1space in 2018 to accelerate data access across public and private clouds through a single object namespace.

“Simply put, it’s a way of placing the right data in the right place at the right time, so that when the GPU is busy, the data can be sent to it quickly,” Das said.

Das said Nvidia customers would be able to use enterprise storage from any vendor. The SwiftStack 1space technology will form the “storage orchestration layer” that sits between the compute and the storage to properly place the data so the AI stack runs optimally, Das said.

“We are not a storage vendor. We do not intend to be a storage vendor. We’re not in the business of selling storage in any form,” Das said. “We work very closely with our storage partners. This acquisition is designed to further the integration between different storage technologies and the work we do for AI.”


Nvidia partners with storage vendors such as Pure Storage, NetApp, Dell EMC and IBM. The storage vendors integrate Nvidia GPUs into their arrays or sell the GPUs along with their storage in reference architectures.

Nvidia attracted to open source tech

Das said Nvidia found SwiftStack attractive because its software is based on open source technology. SwiftStack’s eponymous object- and file-based storage and data management software is rooted in open source OpenStack Swift. Das said Nvidia plans to continue to work with the SwiftStack team to advance and optimize the technology and make it available through open source avenues.

“The SwiftStack team is part of Nvidia now,” he said. “They’re super talented. So, the innovation will continue to happen, and all that innovation will be upstreamed into the open source SwiftStack. It will be available to anybody.”


SwiftStack laid off an undisclosed number of sales and marketing employees in late 2019, but kept the engineering and support team intact, according to president Joe Arnold. He attributed the layoffs to a shift in sales focus from classic backup and archiving to AI, machine learning and data analytics use cases.

The SwiftStack 7.0 software update that emerged late last year took aim at analytics, HPC, AI and ML use cases, such as autonomous vehicle applications that feed data to GPU-based servers. SwiftStack said at the time that it had worked with customers to design clusters that could scale to handle multiple petabytes of data and support throughput in excess of 100 GB per second.

Das said Nvidia has been using SwiftStack’s object storage technology as well as 1space. He said Nvidia’s internal work on data science and AI applications quickly showed the company that accelerating the compute shifts the bottleneck elsewhere, to the storage. That was a factor in Nvidia’s acquisition of SwiftStack, he noted.

“We recognized a long time ago that the way to help the customers is not just to provide them a GPU or a library, but to help them create the entire stack, all the way from the GPU up to the applications themselves. If you look at Nvidia now, we spend most of our energy on the software for different kinds of AI applications,” Das said.

He said Nvidia would fully support SwiftStack’s customer base. SwiftStack claims it has around 125 customers. Its product lineup includes SwiftStack’s object storage software, the ProxyFS file system for integrated file and object API access, and 1space. SwiftStack’s software is designed to run on commodity hardware on premises, and its 1space technology can run in the public cloud.

SwiftStack spent more than eight years expanding its software’s capabilities after the company’s 2011 founding. Das said Nvidia has no reason to sell SwiftStack’s proprietary software because Nvidia does not intend to compete head-to-head with other object storage providers.

“Our philosophy here at Nvidia is we are not trying to compete with infrastructure vendors by selling some kind of a stack that competes with other peoples’ stacks,” Das said. “Our goal is simply to make people successful with AI. We think, if that happens, everybody wins, including Nvidia, because we believe GPUs are the best platform for AI.”


HCI market grows as storage, servers shrink

Storage and server revenue keep declining sharply. Much of that money is going to public clouds but also to hyper-converged infrastructure systems that combine storage and servers.

Dell EMC, NetApp and Hewlett Packard Enterprise (HPE) all reported declines in storage revenue for their most recent quarters. IBM storage inched up 3% after several quarters of decline. Pure Storage grew revenue 17% last quarter, but that’s a far cry from its growth in the 30% range as recently as mid-2018.

Naveen Chhabra, Forrester Research senior analyst for servers and operations, said the cloud and hyper-converged infrastructure (HCI) are taking a toll on traditional storage.

“The entire storage market is in trouble,” Chhabra said. “Every storage vendor has declining revenue. The only one that has shown growth is Pure. Storage investment is happening in the cloud, and the rest of storage is under tremendous pressure.”

Chhabra said he expects the HCI market will remain strong and continue to eat into storage and server sales. “There’s no stopping that,” he said. “Everything, including storage, eventually ends up deployed on the server.”

Dell and HPE sell both servers and storage, and they best illustrate the shift from those technologies to hyper-converged.


HCI market remains extensive

Dell EMC’s storage revenue fell 3% to $4.5 billion last quarter, and servers and networking declined 19% to $4.3 billion. However, COO Jeff Clarke said hyper-converged revenue grew by more than 10%, mainly thanks to its VMware vSAN-powered VxRail product.

HPE storage declined 0.5% to $1.25 billion and compute fell 10% to $3 billion, but its SimpliVity HCI revenue ticked up by 6%.

At the same time, the leading HCI software vendors increased revenue.

Nutanix revenue grew 21% year over year to $347 million, and its billings increased 4% to $428 million. Dell-owned VMware’s vSAN bookings increased “in the mid-teens,” according to the vendor. Both Nutanix and VMware claim they would have grown HCI revenue more, but they have switched to subscription licensing, which decreases upfront revenue.

HPE actually picked up more HCI hardware customers through Nutanix, which now sells its software stack on HPE ProLiant servers as part of an OEM deal signed in 2019.

Nutanix said its DX Series, consisting of Nutanix software on HPE servers, accounted for 117 new customers in the first full quarter of the partnership. Nutanix CEO Dheeraj Pandey said those deals included a $4 million subscription deal with a financial services company and a $1 million deal with another financial services firm.

“HPE is becoming a pretty substantial portion” of Nutanix business, Pandey said. “It’s looking like a win-win for both sides.”

HPE is also offering Nutanix software as a service through its GreenLake program, but it has not disclosed numbers for those deals.

Pandey said while Nutanix sells HPE servers with its software, many deals come through recommendations from HPE. The Nutanix software stack includes something HPE’s SimpliVity HCI software lacks: a built-in hypervisor. Nutanix’s AHV hypervisor gives customers an alternative to VMware virtualization.

“We have big customers out there who like HPE, and they’d like to consume Nutanix software on HPE servers,” Pandey said. “We’re one of the few companies that deliver the full stack, including HCI, databases, end-user computing and automation. Our largest customers are AHV customers; they’re full-stack customers on Nutanix. We can run on top of Dell servers, HPE servers, our own white box servers, and we can take our software to the public cloud.”

Dell, VMware HCI market leaders

According to the most recent IDC hyper-converged market tracker for the third quarter of 2019, Dell led in systems revenue with a 35.1% share, followed by Nutanix at 13%, Cisco with 5.4%, HPE at 4.6% and Lenovo at 4.5%. IDC recognizes HCI software separately, with VMware at No. 1 with 38% share followed by Nutanix at 27.2%.

Dell still sells Nutanix software on PowerEdge servers as part of a deal that predates Dell’s acquisition of EMC (which included VMware), but it focuses more on pushing VxRail systems with vSAN.

Chhabra said Dell recognized the HCI trend well before HPE and rode that to the HCI market lead. He said he sees Nutanix and HPE growing closer to help battle the Dell-VMware HCI combination.

“How does HPE compete with Dell plus VMware?” he said. “Here comes a strong partner in Nutanix, which can give HPE a like-to-like competitor to Dell. Do you have a hypervisor, do you have infrastructure, do you have storage? That’s what the Dell-VMware combination is, and now HPE has that.”

Dell CFO Thomas Sweet said he expects HCI to continue as Dell EMC’s fastest growing storage segment through this year.

“We’ve had great success with our VxRail product,” Sweet said on Dell’s earnings call last week. “We’ve seen softness in the core [storage] array business. That infrastructure space has been soft.”


Investments in data storage vendors topped $2B in 2019

Data storage vendors received $2.1 billion in private funding in 2019, according to a SearchStorage.com analysis of data from websites that track venture funding. Not surprisingly, startups in cloud backup, data management and ultrafast scale-out flash continue to attract the greatest interest from private investors.

Six private data storage vendors closed funding rounds of more than $100 million in 2019, all in the backup/cloud sector. It’s a stretch to call most of these companies startups; all but one have been selling products for years.

A few vendors with disruptive storage hardware also got decent chunks of money to build out arrays and storage systems, although these rounds were much smaller than the data protection vendors received.

According to a recent report by PwC/CB Insights MoneyTree, 213 U.S.-based companies closed funding rounds of at least $100 million last year. The report pegged overall funding for U.S. companies at nearly $108 billion, down 9% year on year but well above the $79 billion total from 2017.

Despite talk of a slowing global economy, data growth is expected to accelerate for years to come. And as companies mine new intelligence from older data, data centers need more storage and better management than ever. The funding is flowing more to vendors that manage that data than to systems that store it.

“Investors don’t lead innovation; they follow innovation. They see a hot area that looks like it’s taking off, and that’s when they pour money into it,” said Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore.

Here is a glance at the largest funding rounds by storage companies in 2019, starting with software vendors:

Kaseya Limited, $500 million: Investment firm TPG will help Kaseya further diversify the IT services it can offer to managed cloud providers. Kaseya has expanded into backup in recent years, adding web-monitoring software vendor ID Agent last year. That deal followed earlier pickups of Spanning Cloud Apps and Unitrends.

Veeam Software, $500 million: Veeam pioneered backup of virtual machines and serves many Fortune 500 companies. Insight Partners invested half of a billion dollars in Veeam in January 2019, and followed up by buying Veeam outright in January 2020 for a $5 billion valuation. That may lead to an IPO. Veeam headquarters are shifting to the U.S. from Switzerland, and Insight plans to focus on landing more U.S. customers.

Rubrik, $261 million: The converged storage vendor has amassed $553 million since launching in 2014. The latest round of Bain Capital investment reportedly pushed Rubrik’s valuation north of $3 billion. Flush with investment, Rubrik said it’s not for sale — but is shopping to acquire hot technologies, including AI, data analytics and machine learning.

Clumio, $175 million: Sutter Hill Ventures provided $40 million in April, on top of an $11 million 2017 round. It then came back for another $135 million bite in November, joined by Altimeter Capital. Clumio is using the money to add cybersecurity to its backup as a service in Amazon Web Services.

Acronis, $147 million: Acronis was founded in 2003, so it’s halfway into its second decade. But the veteran data storage vendor has a new focus of backup blended with cybersecurity and privacy, similar to Clumio. The Goldman Sachs-led funding helped Acronis acquire 5nine to manage data across hybrid Microsoft clouds.

Druva, $130 million: Viking Global Investors led a six-participant round that brought Druva money to expand its AWS-native backup and disaster recovery beyond North America to international markets. Druva since has added low-cost tiering to Amazon Glacier, and CEO Jaspreet Singh has hinted Druva may pursue an IPO.

Notable 2019 storage funding rounds

Data storage startups in hardware

Innovations in storage hardware underscore the ascendance of flash in enterprise data centers. Although fewer in number, the following storage startups are advancing fabrics-connected devices for high-performance workloads.

Over time, these data storage startups may mature to deliver hardware that blends low latency, high IOPS and manageable cost, emerging as competitors to the leading array vendors. For now, these products have a limited market of companies that need petabytes (PB) of storage or more, but the technologies bear watching due to their speed, density and performance potential.

Lightbits Labs, $50 million: The Israel-based startup created the SuperSSD array for NVMe flash. The Lightbits software stack converts generic in-the-box TCP/IP into a switched Ethernet fabric, presenting all storage as a single giant SSD. SuperSSD starts at 64 PB before data reduction. Dell EMC led Lightbits’ funding, with contributions from Cisco and Micron Technology.

Vast Data, $40 million: Vast’s Universal Storage platform is not for everyone. Minimum configuration starts at 1 PB. Storage class memory and low-cost NAND are combined for unified block, file and object storage. Norwest Venture Partners led the round, with participation from Dell Technologies Capital and Goldman Sachs.

Honorable mentions in hardware include Pavilion Data Systems and Liqid. Pavilion is one of the last remaining NVMe all-flash startups, picking up $25 million in a round led by Taiwania Capital and RPS Ventures to flesh out its Hyperparallel Flash Array.

Liqid is trying to break into composable infrastructure, a term coined by Hewlett Packard Enterprise to signify the ability for data centers to temporarily lease capacity and hardware by the rack. Panorama Point Partners provided $28 million to help the startup flesh out its Liqid CI software platform.


For Sale – Huawei Matebook X Pro – i7, 512GB, MX150

I am selling my Huawei Matebook X Pro, i7, 8GB RAM, 512GB Storage, GPU MX150.
It is in excellent condition and I can’t find any scratches or dings on it anywhere.

I bought it from the Microsoft Store so I have been the only owner, and I purchased it on 21st November 2018.
It comes in the original box, with the original charger and the HDMI accessory that came with it.

The only reason for sale is that I was travelling a lot with work at the time and since that has died down, I’ve built a desktop.

This is an excellent laptop with a great display and battery life has never let me down.


Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.
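As a rough sketch of what “indicating the capacity they need” looks like in Kubernetes terms, the helper below builds a PersistentVolumeClaim manifest of the kind a developer would submit, leaving provisioning to the operator-managed backend. The storage class name is an assumption based on common OCS naming, and the whole example is illustrative rather than taken from Red Hat documentation.

```python
# Illustrative sketch: a developer requests capacity by submitting a
# PersistentVolumeClaim; the operator-driven storage layer provisions a
# matching volume. The default storage class name below is an assumed
# example, not confirmed against Red Hat docs.

def pvc_manifest(name: str, size_gi: int,
                 storage_class: str = "ocs-storagecluster-ceph-rbd") -> dict:
    """Build a PersistentVolumeClaim requesting `size_gi` GiB of block storage."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# A client such as the official Kubernetes Python client could submit this
# dict as-is; the developer never has to touch Ceph directly.
```

The point of the sketch is the division of labor: the application team states a size and access mode, and the storage services behind the StorageClass handle everything else.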

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users moving a large data set to public cloud storage need to factor in network bandwidth and migration time, and consider egress charges if they need to bring the data back from the cloud.
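The bandwidth-and-time consideration comes down to simple arithmetic. The helper below is a back-of-the-envelope sketch; the 70% link-efficiency figure is an assumption, and real migrations also contend with retries, protocol overhead and egress charges.

```python
# Back-of-the-envelope migration-time estimate. The default 70% efficiency
# factor is an assumed figure for sustained throughput on a shared link.

def transfer_time_hours(data_tb: float, link_gbps: float,
                        efficiency: float = 0.7) -> float:
    """Estimate hours to move `data_tb` terabytes over a `link_gbps` link."""
    bits = data_tb * 8e12                      # 1 TB = 8e12 bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency  # sustained, not rated, throughput
    return bits / usable_bps / 3600

# e.g. 100 TB over a 10 Gbps link at 70% efficiency is roughly 31.7 hours,
# which is why multi-petabyte moves are often measured in weeks.
```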


Major storage vendors map out 2020 plans

The largest enterprise storage vendors face a common set of challenges and opportunities heading into 2020. As global IT spending slows and storage gets faster and frequently handles data outside the core data center, primary storage vendors must turn to cloud, data management and newer flash technologies.

Each of the major storage vendors has its own plans for dealing with these developments. Here is a look at what the major primary storage vendors did in 2019 and what you can expect from them in 2020.

Dell EMC: Removing shadows from the clouds

2019 in review: Enterprise storage market leader Dell EMC spent most of 2019 bolstering its cloud capabilities, in many cases trying to play catch-up. New cloud products include VMware-orchestrated Dell EMC Cloud Platform arrays that integrate Unity and PowerMax storage, coupled with VxBlock converged and VxRail hyper-converged infrastructure.

The new Dell EMC Cloud gear allows customers to build and deploy on-premises private clouds with the agility and scale of the public cloud — a growing need as organizations dive deeper into AI and DevOps.

What’s on tap for 2020: Dell EMC officials have hinted at a new Power-branded midrange storage system for several years, and a formal unveiling of that product is expected in 2020. Then again, Dell initially said the next-generation system would arrive in 2019. Customers with existing Dell EMC midrange storage likely won’t be forced to upgrade, at least not for a while. The new storage platform will likely converge features from Dell EMC Unity and SC Series midrange arrays with an emphasis on containers and microservices.

Dell will enhance its tool set for containers to help companies deploy microservices, said Sudhir Srinivasan, CTO of Dell EMC storage. He said containers are a prominent design feature in the new midrange storage.

“Software stacks that were built decades ago are giant monolithic pieces of code, and they’re not going to survive that next decade, which we call the data decade,” Srinivasan said. 

Hewlett Packard Enterprise’s eventful year

2019 in review: In terms of product launches and partnerships, Hewlett Packard Enterprise (HPE) had a busy year in 2019. HPE Primera all-flash storage arrived in late 2019, and HPE expects customers will slowly transition from its flagship 3PAR platform. Primera supports NVMe flash, embedding custom chips in the chassis to support massively parallel data transport on PCI Express lanes. The first Primera customer, BlueShore Financial, received its new array in October.

HPE bought supercomputing giant Cray to expand its presence in high-performance computing, and made several moves to broaden its hyper-converged infrastructure options. HPE ported InfoSight analytics to HPE SimpliVity HCI, as part of the move to bring the cloud-based predictive tools picked up from Nimble Storage across all HPE hardware. HPE launched a Nimble dHCI disaggregated HCI product and partnered with Nutanix to add Nutanix HCI technology to HPE GreenLake services while allowing Nutanix to sell its software stack on HPE servers.

It capped off the year with HPE Container Platform, a bare-metal system to make it easier to spin up Kubernetes-orchestrated containers on bare metal. The Container Platform uses technology from recent HPE acquisitions MapR and BlueData.

What’s on tap for 2020: HPE vice president of storage Sandeep Singh said more analytics are coming in response to customer calls for simpler storage. “An AI-driven experience to predict and prevent issues is a big game-changer for optimizing their infrastructure. Customers are placing a much higher priority on it in the buying motion,” helping to influence HPE’s roadmap, Singh said.

It will be worth tracking the progress of GreenLake as HPE moves towards its goal of making all of its technology available as a service by 2022.

Hitachi Vantara: Renewed focus on traditional enterprise storage

2019 in review: Hitachi Vantara renewed its focus on traditional data center storage, a segment it had largely conceded to other array vendors in recent years. Hitachi underwent a major refresh of the Hitachi Virtual Storage Platform (VSP) flash array in 2019. The VSP 5000 SAN arrays scale to 69 PB of raw storage, and capacity extends higher with hardware-based deduplication in its Flash Storage Modules. By virtualizing third-party storage behind a VSP 5000, customers can scale capacity to 278 PB.

What’s on tap for 2020: The VSP 5000 integrates Hitachi Accelerated Fabric networking technology that enables storage to scale out and scale up. Hitachi plans to phase the networking into other high-performance storage products this year, said Colin Gallagher, a Hitachi vice president of infrastructure products.

“We had been lagging in innovation, but with the VSP 5000, we got our mojo back,” Gallagher said.

Hitachi arrays support containers, and Gallagher said the vendor is considering whether it needs to evolve its support beyond a Kubernetes plugin, as other vendors have done. Hitachi plans to expand data management features in Hitachi Pentaho analytics software to address AI and DevOps deployments. Gallagher said Hitachi’s data protection and storage as a service is another area of focus for the vendor in 2020.

IBM: Hybrid cloud with cyber-resilient storage

2019 in review: IBM brought out the IBM Elastic Storage Server 3000, an NVMe-based array packaged with IBM Spectrum Scale parallel file storage. Elastic Storage Server 3000 combines NVMe flash and containerized software modules to provide faster time to deployment for AI, said Eric Herzog, IBM’s vice president of worldwide storage channels.

In addition, IBM added PCIe-enabled NVMe flash to VersaStack converged infrastructure and midrange Storwize SAN arrays.

What to expect in 2020: Like other storage vendors, IBM is trying to navigate the unpredictable waters of cloud and services. Its product development revolves around storage that can run in any cloud. IBM Cloud Services enables end users to lease infrastructure, platforms and storage hardware as a service. The program has been around for two years and will add IBM software-defined storage to the mix this year. Customers can thus opt to purchase hardware capacity or the IBM Spectrum suite in an OpEx model. Non-IBM customers can run Spectrum storage software on qualified third-party storage.

“We are going to start by making Spectrum Protect data protection available, and we expect to add other pieces of the Spectrum software family throughout 2020 and into 2021,” Herzog said.

Another IBM development to watch in 2020 is how its $34 billion acquisition of Red Hat affects either vendor’s storage products and services.

NetApp: Looking for a rebound

2019 in review: Although spending slowed for most storage vendors in 2019, NetApp saw the biggest decline. At the start of 2019, NetApp forecast annual sales at $6 billion, but poor sales forced NetApp to slash its guidance by around 10% by the end of the year.

NetApp CEO George Kurian blamed the revenue setbacks partly on poor sales execution, a failing he hopes will improve as NetApp institutes better training and sales incentives. The vendor also said goodbye to several top executives who retired, raising questions about how it will deliver on its roadmap going forward.

What to expect in 2020: In the face of the turbulence, Kurian kept NetApp focused on the cloud. NetApp plowed ahead with its Data Fabric strategy to enable ONTAP file services to be consumed, via containers, in the three big public clouds. NetApp Cloud Data Service, available first on NetApp HCI, allows customers to consume ONTAP storage locally or in the cloud. The vendor capped off the year with NetApp Keystone, a pay-as-you-go purchasing option similar to the offerings of other storage vendors.

Although NetApp plans hardware investments, storage software will account for more revenue as companies shift data to the cloud, said Octavian Tanase, senior vice president of the NetApp ONTAP software and systems group.

“More data is being created outside the traditional data center, and Kubernetes has changed the way those applications are orchestrated. Customers want to be able to rapidly build a data pipeline, with data governance and mobility, and we want to try and monetize that,” Tanase said.

Pure Storage: Flash for backup, running natively in the cloud

2019 in review: The all-flash array specialist broadened its lineup with FlashArray//C SAN arrays and denser FlashBlade NAS models. FlashArray//C extends the Pure Storage flagship with a model that supports Intel Optane DC SSD-based MemoryFlash modules and quad-level cell NAND SSDs in the same system.

Pure also took a major step on its journey to convert FlashArray into a unified storage system by acquiring Swedish file storage software company Compuverde. It marked the second acquisition in as many years for Pure, which acquired deduplication software startup StorReduce in 2018.

What to expect in 2020: The gap between disk and flash prices has narrowed enough that it’s time for customers to consider flash for backup and secondary workloads, said Matt Kixmoeller, Pure Storage vice president of strategy.

“One of the biggest challenges — and biggest opportunities — is evangelizing to customers that, ‘Hey, it’s time to look at flash for tier two applications,'” Kixmoeller said.

Flexible cloud storage options and more storage in software are other items on Pure’s roadmap. Cloud Block Store, which Pure introduced last year, is just getting started, Kixmoeller said, and is expected to generate considerable customer interest. Most vendors support Amazon Elastic Block Store (EBS) by placing their arrays in a colocation facility and running their operating software on EBS, but Pure took a different approach, reengineering its back-end software layer to run natively on Amazon S3.

Go to Original Article
Author:

For Sale – Huawei Matebook X Pro – i7, 512GB, MX150

I am selling my Huawei Matebook X Pro, i7, 8GB RAM, 512GB Storage, GPU MX150.
It is in excellent condition and I can’t find any scratches or dings on it anywhere.

I bought it from the Microsoft Store so I have been the only owner, and I purchased it on 21st November 2018.
It comes in the original box, with the original charger and the HDMI accessory that came with it.

The only reason for sale is that I was travelling a lot with work at the time and since that has died down, I’ve built a desktop.

This is an excellent laptop with a great display and battery life has never let me down.


Box vs. Dropbox outages in 2019

In this infographic, we present a timeline of significant service disruptions in 2019 for Box vs. Dropbox.

Box vs. Dropbox outages in 2019

Cloud storage providers Box and Dropbox self-report service disruptions throughout each year. In 2019, Dropbox posted publicly about eight incidents; Box listed more than 50. But the numbers don’t necessarily provide an apples-to-apples comparison, because each company gets to choose which incidents to disclose.

This infographic includes significant incidents that prevented users from accessing Box or Dropbox in 2019, or at least from uploading and downloading documents. It excludes outages that appeared to last 10 minutes or less, as well as incidents labeled as having only “minor” or “medium” impact.
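The inclusion criteria above amount to a simple filter over each provider’s self-reported incident list. A minimal sketch in Python (the `minutes` and `impact` field names are illustrative, not taken from either status page):

```python
def significant(incidents):
    """Keep only significant incidents per the criteria above:
    drop outages lasting 10 minutes or less, and drop incidents
    tagged with only 'minor' or 'medium' impact."""
    return [
        i for i in incidents
        if i["minutes"] > 10 and i["impact"] not in ("minor", "medium")
    ]
```

Applying this consistently to both vendors’ lists is what makes the comparison closer to apples-to-apples, even though each company decides what to disclose in the first place.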

To view the full list of 2019 incidents for Box vs. Dropbox, visit status.box.com and status.dropbox.com.


Cloudian CEO: AI, IoT drive demand for edge storage

AI and IoT are driving demand for edge storage, as data is being created faster than it can reasonably be moved across clouds, object storage vendor Cloudian’s CEO said.

Cloudian CEO Michael Tso said “Cloud 2.0” is giving rise to the growing importance of edge storage among other storage trends. He said customers are getting smarter about how they use the cloud, and that’s leading to growing demand for products that can support private and hybrid clouds. He also detects an increased demand for resiliency against ransomware attacks.

We spoke with Tso about these trends, including the Edgematrix subsidiary Cloudian launched in September 2019 that focuses on AI use cases at the edge. Tso said we can expect more demand for edge storage and spoke about an upcoming Cloudian product related to this. He also talked about how AI relates to object storage, and if Cloudian is preparing other Edgematrix-like spinoffs.

What do you think storage customers are most concerned with now?

Michael Tso: I think there is a lot, but I’ll just concentrate on two things here. One is that they continue to just need lower-cost, easier to manage and highly scalable solutions. That’s why people are shifting to cloud and looking at either public or hybrid/private.

Related to that point is I think we’re seeing a Cloud 2.0, where a lot of companies now realize the public cloud is not the be-all, end-all and it’s not going to solve all their problems. They look at a combination of cloud-native technologies and use the different tools available wisely.

I think there’s the broad brush of people needing scalable solutions and lower costs — and that will probably always be there — but the undertone is people getting smarter about private and hybrid.

Point number two is around data protection. We’re now seeing more and more customers worried about ransomware. They’re keeping backups for longer and longer and there is a strong need for write-once compliant storage. They want to be assured that any ransomware that is attacking the system cannot go back in time and mess up the data that was stored from before.

Cloudian actually invested very heavily in building write-once compliant technologies, primarily for the financial and military markets, because that was where we were seeing it first. Now it’s become a feature that almost everyone we talk to who is doing data protection is asking for.
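On S3-compatible stores, write-once behavior is typically exposed through the standard S3 Object Lock API: each object is written with a compliance-mode lock and a retain-until date, after which it can neither be overwritten nor deleted. As a hedged sketch (the bucket name, endpoint and retention period are illustrative; the target bucket must have Object Lock enabled at creation):

```python
from datetime import datetime, timedelta, timezone

def worm_put_kwargs(bucket, key, body, retention_days):
    """Build the keyword arguments for an S3 PutObject call that
    applies a compliance-mode object lock, making the object
    immutable until the retention date passes."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# With boto3 against an S3-compatible endpoint (not run here;
# endpoint URL is hypothetical):
#   s3 = boto3.client("s3", endpoint_url="https://objects.example.local")
#   s3.put_object(**worm_put_kwargs("backups", "db-dump.tar", data, 90))
```

Because the lock travels with the object, a ransomware process that later gains credentials still cannot “go back in time” and tamper with backups written before the attack.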

People are getting smarter about hybrid and multi-cloud, but what’s the next big hurdle to implementing it?

Tso: I think as people are now thinking about a post-cloud world, one of the problems that large enterprises are coming up against is data migration. It’s not easy to add another cloud when you’re fully in one. I think if there’s any kind of innovation in being able to off-load a lot of data between clouds, that will really free up that marketplace and allow it to be more efficient and fluid.

Right now, cloud is a bunch of silos. Whatever data people have stored in cloud one is kind of what they’re stuck with, because it will take them a lot of money to move data out to cloud two, and it’s going to take them years. So, they’re kind of building strategies around that as opposed to really, truly being flexible in terms of where they keep data.

What are you seeing on the edge?

Tso: We’re continuing to see more and more data being created at the edge, and more and more use cases of the data needing to be stored close to the edge because it’s just too big to move. One classic use case is IoT. Sensors, cameras — that sort of stuff. We already have a number of large customers in the area and we’re continuing to grow in that area.

The edge can mean a lot of different things. Unfortunately, a lot of people are starting to hijack that word and make it mean whatever they want it to mean. But what we see is just more and more data popping up in all kinds of locations, with the need of having low-cost, scalable and hybrid-capable storage.

We’re working on getting a ruggedized, easy-to-deploy cloud storage solution. What we learned from Edgematrix was that there’s a lot of value to having a ruggedized edge AI device. But the unit we’re working on is going to be more like a shipping container or a truck as opposed to a little box like with Edgematrix.

What customers would need a mobile cloud storage device like you just described?

Tso: There are two distinct use cases here. One is that you want a cloud on the go, meaning it is self-contained. It means if the rest of the infrastructure around you has been destroyed, or your internet connectivity has been destroyed, you are still able to do everything you could do with the cloud. The intention is a completely isolatable cloud.

In the military application, it’s very straightforward. You always want to make sure that if the enemy is attacking your communication lines and shooting down satellites, wherever you are in the field, you need to have the same capability that you have during peacetime.

But the civilian market, especially in disaster response, is another area where we are seeing demand. It’s state and local governments asking for it. In the event of a major disaster, oftentimes for a period, they don’t have any access to the internet. So the idea is to run a cloud in a ruggedized unit that is completely stand-alone until connectivity is restored.

AI-focused Edgematrix started as a Cloudian idea. What does AI have to do with object storage?

Tso: AI is an infinite data consumer. Improvement in AI accuracy is on a log scale — you need exponentially more data for each additional gain in accuracy. So, a lot of the reasons why people are accumulating all this data is to run their AI tools and run AI analysis. It’s part of the reason why people are keeping all their data.

Being S3 object store compatible is a really big deal because that allows us to plug into all of the modern AI workloads. They’re all built on top of cloud-native infrastructure, and what Cloudian provides is the ability to run those workloads wherever the data happens to be stored, and not have to move the data to another location.
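In practice, S3 compatibility means an AI workload’s existing S3 client code is simply pointed at a different endpoint; nothing else in the data pipeline changes. A minimal sketch of the idea (the on-prem endpoint URL is hypothetical, and the commented calls assume an S3 SDK such as boto3):

```python
def s3_client_config(endpoint=None):
    """Return client settings for an S3 SDK. With no endpoint, the
    SDK targets AWS S3; an explicit endpoint redirects the exact
    same API calls to an S3-compatible store at the edge or
    on-prem, with no application-code changes."""
    cfg = {"service_name": "s3"}
    if endpoint:
        cfg["endpoint_url"] = endpoint
    return cfg

# Same training-data loader, two storage targets (not run here):
#   boto3.client(**s3_client_config())                              # AWS S3
#   boto3.client(**s3_client_config("https://objects.lab.local"))   # on-prem
```

This endpoint swap is what lets AI workloads built on cloud-native tooling run against data wherever it was created, rather than forcing a bulk copy to a public cloud first.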

Are you planning other Edgematrix-like spinoffs?

Tso: Not in the immediate future. We’re extremely pleased with the way Edgematrix worked out, and we certainly are open to doing more of this kind of spinoff.

We’re not a small company anymore, and one of the hardest things for startups in our growth stage is balancing creativity and innovation with growing the core business. We seem to have found a good sort of balance, but it’s not something that we want to do in volume because it’s a lot of work.
