
VMware vSAN HCI: Complete stack or ‘vaporware’?

Days after VMware’s CEO proclaimed his vSAN product the winner in the hyper-converged infrastructure space, the CEO of VMware rival Nutanix countered that VMware “sells a lot of vaporware.”

“We’re crushing Nu … I mean we’re winning in the marketplace,” VMware CEO Pat Gelsinger said during his opening VMworld keynote last week. “We’re separating from No. 2. We’re winning in the space.”

Two days later on Nutanix’s earnings call, CEO Dheeraj Pandey took a shot at VMware without mentioning the company by name. “We don’t sell vaporware,” he said, when referring to why Nutanix wins in competitive deals.

In an exclusive interview after the call, Pandey admitted the vaporware charge was aimed mostly at VMware’s vSAN HCI software.

Pat Gelsinger, VMware CEO

“VMware sells a lot of vaporware,” Pandey said. “A lot of that vaporware becomes evident to customers who buy that stuff. When bundled products don’t deliver on their promise, they call us. What we sell is not shelfware.”

Whatever VMware is selling with its vSAN HCI software, it is working. VMware reported license bookings of its vSAN HCI software grew 45% year over year last quarter, while Nutanix revenue and bookings slipped from last year. VMware’s parent Dell also claimed a 77% increase in orders of its Dell EMC VxRail HCI appliances that run vSAN software.

Those numbers suggest Dell increased market share against Nutanix, even if Nutanix did better than expected last quarter following a disappointing period. IDC listed VMware as the HCI software market leader and Dell as the hardware HCI leader in the first quarter of 2019, with Nutanix second in both categories. Gartner lists Nutanix as the HCI software leader, but No. 2 VMware made up ground in Gartner’s first-quarter numbers.

Nutanix’s Pandey attributed at least some of VMware’s HCI success to bundling its vSAN software with its overall virtualization stack. Like VMware, Nutanix has its own hypervisor (AHV) and its share of hardware partners — including Dell — but VMware has a huge vSphere installed base to sell vSAN into.

Dheeraj Pandey, Nutanix CEO

Pandey said he was unimpressed by VMware’s Kubernetes and open source plans laid out at VMworld, which included Tanzu and Project Pacific. Both are still roadmap items but reflect a commitment from VMware to containers and open source software.

“That’s worse than vaporware, that’s slideware,” Pandey said of VMware’s announcements. “Everything works in slides. We’re based on Linux; we get a lot of leverage out of open source. AHV was based on Linux, and we’ve made it enterprise grade.”

Making vSAN part of its vSphere virtualization platform has paid off for VMware. Customers at VMworld pointed to their familiarity with VMware, vSAN’s integration with vSphere and NSX software-defined networking as reasons for going with vSAN HCI.

“What really ended up selling it for us was, we were already using VMware for our base product and the vast majority of the deliverables that our customers request is in vSphere,” said Lester Shisler, senior IT systems engineer at Harmony Healthcare IT, based in South Bend, Ind. “So whatever pain points we learned along the way with vSAN, we were going to have to learn [with a competing HCI product] as well, along with new software and new management and everything else.”

Matthew Douglas, chief enterprise architect at Sentara Healthcare in Norfolk, Va., said Nutanix was among the HCI options he looked at before picking vSAN.

“VMware was ultimately the choice,” he said. “All the others were missing some components. VMware was a consistent platform for hyper-converged infrastructure. Plus, there was NSX and all these things that fit together in a nice, uniform fashion. And as an enterprise, I couldn’t make a choice of all these independent different tools. Having one consistent tool was the differentiator.”

Despite losing share, Nutanix’s last-quarter results were mixed. Its revenue of $300 million and billings of $372 million were both down from last year but better than expected following the disappointing previous quarter. Nutanix’s software and support revenue of $287 million increased 7%, a good sign for the HCI pioneer’s move to a software-centric business model. Nutanix also reported a 16% growth in deals over $1 million from the previous quarter.

However, operating expenses also increased. Sales and marketing spend jumped to $254 million from $183 million the previous year. Nutanix, which has never recorded a profit, lost $194 million in the quarter — more than double its losses from a year ago. It finished the quarter with $909 million in cash, down from $943 million last year.

Pandey said he is more concerned about growth and customer acquisition than profitability.

“Profitability is a nuanced word,” Pandey said. “We defer so much in our balance sheet. Right now we care about doing right by the customer when we sell them subscriptions.”


DataCore adds new HCI, analytics, subscription price options

Storage virtualization pioneer DataCore Software revamped its strategy with a new hyper-converged infrastructure appliance, cloud-based predictive analytics service and subscription-based licensing option.

DataCore launched the new offerings this week as part of an expansive DataCore One software-defined storage (SDS) vision that spans primary, secondary, backup and archival storage across data center, cloud and edge sites.

For the last two decades, customers have largely relied on authorized partners and OEMs, such as Lenovo and Western Digital, to buy the hardware to run their DataCore storage software. But next Monday, they’ll find new 1U and 2U DataCore-branded HCI-Flex appliance options that bundle DataCore software and VMware vSphere or Microsoft Hyper-V virtualization technology on Dell EMC hardware. Pricing starts at $21,494 for a 1U box with 3 TB of usable SSD capacity.

The HCI-Flex appliance reflects “the new thinking of the new DataCore,” said Gerardo Dada, who joined the company last year as chief marketing officer.

DataCore software can pool and manage internal storage, as well as external storage systems from other manufacturers. Standard features include parallel I/O to accelerate performance, automated data tiering, synchronous and asynchronous replication, and thin provisioning.

New DataCore SDS brand

In April 2018, DataCore unified and rebranded its flagship SANsymphony software-defined storage and Hyperconverged Virtual SAN software as DataCore SDS. Although the company’s website continues to feature the original product names, DataCore will gradually transition to the new name, said Augie Gonzalez, director of product marketing at DataCore, based in Fort Lauderdale, Fla.

With the product rebranding, DataCore also switched to simpler per-terabyte pricing instead of charging customers based on à la carte features, nodes with capacity limits and separate expansion capacity. With this week’s strategic relaunch, DataCore is adding the option of subscription-based pricing.

Just as DataCore faced competitive pressure to add predictive analytics, the company also needed to provide a subscription option, because many other vendors offer it, said Randy Kerns, a senior strategist at Evaluator Group, based in Boulder, Colo. Kerns said consumption-based pricing has become a requirement for storage vendors competing against the public cloud.

“And it’s good for customers. It certainly is a rescue, if you will, for an IT operation where capital is difficult to come by,” Kerns said, noting that capital expense approvals are becoming a bigger issue at many organizations. He added that human nature also comes into play. “If it’s easier for them to get the approvals with an operational expense than having to go through a large justification process, they’ll go with the path of least resistance,” he said.

DataCore software-defined storage dashboard

DataCore Insight Services

DataCore SDS subscribers will gain access to the new Microsoft Azure-hosted DataCore Insight Services. DIS uses telemetry-based data the vendor has collected from thousands of SANsymphony installations to detect problems, determine best-practice recommendations and plan capacity. The vendor claimed it has more than 10,000 customers.

Like many storage vendors, DataCore will use machine learning and artificial intelligence to analyze the data and help customers to proactively correct issues before they happen. Subscribers will be able to access the information through a cloud-based user interface that is paired with a local web-based DataCore SDS management console to provide resolution steps, according to Steven Hunt, a director of product management at the company.

New DataCore HCI-Flex appliance model on Dell hardware

DataCore customers with perpetual licenses will not have access to DIS. But, for a limited time, the vendor plans to offer a program for them to activate new subscription licenses. Gonzalez said DataCore would apply the annual maintenance and support fees on their perpetual licenses to the corresponding DataCore SDS subscription, so there would be no additional cost. He said the program will run at least through the end of 2019.

Shifting to subscription-based pricing to gain access to DIS could cost a customer more money than perpetual licenses in the long run.

“But this is a service that is cloud-hosted, so it’s difficult from a business perspective to offer it to someone who has a perpetual license,” Dada said.

Johnathan Kendrick, director of business development at DataCore channel partner Universal Systems, said his customers who were briefed on DIS have asked what they need to do to access the services. He said he expects even current customers will want to move to a subscription model to get DIS.

“If you’re an enterprise organization and your data is important, going down for any amount of time will cost your company a lot of money. To be able to see [potential issues] before they happen and have a chance to fix that is a big deal,” he said.

Customers have the option of three DataCore SDS editions: enterprise (EN) for the highest performance and richest feature set, standard (ST) for midrange deployments, and large-scale (LS) for secondary “cheap and deep” storage, Gonzalez said.

Price comparison

Pricing is $416 per terabyte for a one-year subscription of the ST option, with support and software updates. The cost for a perpetual ST license is $833 per terabyte, inclusive of one year of support and software updates. The subsequent annual support and maintenance fees are 20%, or $166 per year, Gonzalez said. He added that loyalty discounts are available.
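Taken at face value, those list prices imply a simple break-even calculation. The sketch below (an illustration only, assuming flat renewal pricing and ignoring the loyalty discounts Gonzalez mentioned) compares cumulative per-terabyte cost of the two ST licensing models over time:

```python
# Rough cumulative cost per terabyte of DataCore SDS ST licensing,
# using the list prices quoted above. Assumes renewal prices stay
# flat and ignores loyalty discounts.

SUBSCRIPTION_PER_TB = 416   # one-year subscription, support included
PERPETUAL_PER_TB = 833      # perpetual license, first year of support included
ANNUAL_SUPPORT = 166        # ~20% of perpetual price, charged from year 2 on

def subscription_cost(years):
    """Total paid after `years` of annual subscription renewals."""
    return SUBSCRIPTION_PER_TB * years

def perpetual_cost(years):
    """Perpetual license plus support renewals after the first year."""
    return PERPETUAL_PER_TB + ANNUAL_SUPPORT * max(0, years - 1)

for years in range(1, 6):
    print(years, subscription_cost(years), perpetual_cost(years))
# The subscription is cheaper for the first two years; the perpetual
# license costs less in total from the third year on.
```

That lines up with the point above: a customer who switches to a subscription to get DataCore Insight Services can end up paying more over a multi-year deployment than one who keeps a perpetual license.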

The new PSP 9 DataCore SDS update that will become generally available in mid-July includes new features, such as AES 256-bit data-at-rest encryption that can be used across pools of storage arrays, support for VMware’s Virtual Volumes 2.0 technology and UI improvements.

DataCore plans another 2019 product update that will include enhanced file access and object storage options, Gonzalez said.

This week’s DataCore One strategic launch comes 15 months after Dave Zabrowski replaced founder George Teixeira as CEO. Teixeira remains with DataCore as chairman.

“They’re serious about pushing toward the future, with the new CEO, new brand, new pricing model and this push to fulfill more of the software-defined stack down the road, adding more long-term archive type storage,” Jeff Kato, a senior analyst at Taneja Group in West Dennis, Mass., said of DataCore. “They could have just hunkered down and stayed where they were at and rested on their installed base. But the fact that they’ve modernized and gone for the future vision means that they want to take a shot at it.

“This was necessary for them,” Kato said. “All the major vendors now have their own software-defined storage stacks, and they have a lot of competition.”


VMworld pushes vSAN HCI to cloud, edge

VMware executives predict the vSAN hyper-converged software platform will grow rapidly into a key building block for the vendor’s strategy to conquer the cloud and other areas outside the data center.

VMware spent a lot of time discussing the roadmap for its vSAN hyper-converged infrastructure (HCI) software at VMworld 2018 last month. The vSAN news included short-term specifics with the launch of a private beta program for the next version, along with more general overarching plans for the future.

VMware executives made it clear that vSAN HCI will play a big role in its long-term cloud strategy. They painted HCI as a technology spanning from the data center to the cloud to the edge, as it brings storage, compute and other resources together into a single platform.

The vSAN HCI software is built into VMware’s vSphere hypervisor and is sold as part of integrated appliances such as Dell EMC VxRail and as Ready Node bundles with servers. VMware claims more than 14,000 vSAN customers, and IDC lists it as the revenue leader in HCI software.

VMware opened its private beta program for vSAN 6.7.1 during VMworld, adding file and native cloud storage and data protection features.

VSAN HCI: From DC to cloud to edge

During his opening day keynote at VMworld, VMware CEO Pat Gelsinger called vSAN “the engine that’s just been moving rapidly to take over the entire integration of compute and storage to expand to other areas.”

Where is HCI moving to? Just about everywhere, according to VMware executives. That includes Project Dimension, a planned hardware-as-a-service offering designed to bring VMware SDDC infrastructure on premises.

“The definition of HCI has been expanding,” said Yanbing Li, VMware senior vice president and general manager of storage and availability. “We started with a simple mission of converging compute and storage by putting both on a software-defined platform running on standard servers. This is where a lot of our customer adoption has happened. But the definition of HCI is expanding up through the stack, across to the cloud and it’s supporting a wide variety of applications.”

VSAN beta: Snapshots, native cloud storage

The vSAN 6.7.1 beta includes policy-based native snapshots for data protection, NFS file services and support for persistent storage for containers. VMware also added the ability for vSAN to manage Amazon Elastic Block Storage (EBS) in AWS, a capacity reclamation feature and a Quickstart guided cluster creation wizard.


Lee Caswell, VMware vice president of products for storage and availability, said vSAN can now take point-in-time snapshots across a cluster. The snapshot capability is managed through VMware’s vCenter. There is no native vSAN replication yet, however. Replication still requires vSphere Replication.

Caswell said the file services include a clustered namespace, allowing users to move files to VMware Cloud on AWS and back without requiring separate mount points for each node.

The ability to manage elastic capacity in AWS allows customers to scale storage and compute independently.

“This is our first foray into storage-only scaling,” Caswell said.

The automatic capacity reclamation feature will reclaim unused capacity on expensive solid-state drive storage.

Caswell said there was no timetable for when the features will make it into a general availability version of vSAN.

Mercy Ships was among the customers at VMworld expanding their vSAN HCI adoption. Mercy Ships uses Dell EMC VxRail appliances running vSAN in its Texas data center and is adding VxRail on two hospital ships that bring volunteer medical teams to underdeveloped areas. They include the current Africa Mercy floating hospital and a second ship under construction.

“The data center for us needs to be simple, straightforward, scalable and supportable,” Mercy Ships CIO Chris Gregg said. “That’s the dream we’re seeing through hyper-converged infrastructure. If it pans out as we hope, it will be data center as a service. Then, as an IT department we can focus on things that are really important to the organization. For us, that means serving more patients.”

VSAN hyper-converged users offer buying, implementing advice

LAS VEGAS — Today, VMware paints vSAN hyper-converged technology as a key piece of IT everywhere, from the data center to the cloud to the edge. But early vSAN customers remember when it was still a nascent concept and not fully proven.

At a customer panel at VMworld 2018, vSAN hyper-converged software users offered advice for buying and implementing what, in some cases, was still a suspect technology when they adopted it. The customers were split between running vSAN on integrated appliances, such as Dell EMC VxRail hardware, or buying it on servers as vSAN Ready Nodes. Either way, they faced similar buying decisions and implementation challenges.

Here is some of the advice offered for going down the road of vSAN hyper-converged and hyper-converged infrastructure (HCI) in general.

Start small and prove its value

Several of the vSAN hyper-converged customers said it was difficult to gain support originally for moving from a traditional three-tier architecture to HCI. It helped to start with a specific use case to prove the technology and then grow from there.

William Dufrin, IT manager of client virtualization engineering and architecture at General Motors, said the early case was virtual desktop infrastructure (VDI).

“In our environment, change is kind of rough,” Dufrin said. “We’re a large organization, and it could be difficult to make changes like vSAN instead of traditional storage.”

He said IT developers started using vSAN for VDI in 2014, “and in four years, we’ve seen a huge adoption rate inside the organization because of the values and the savings. It’s been stable, and performance has been phenomenal.”

Dufrin said General Motors now has around 10,000 virtual desktops running on a six-node cluster, with two fault domains for availability.

Mark Fournier, systems architect for the United States Senate Federal Credit Union in Alexandria, Va., said his credit union started with vSAN Ready Nodes in remote branches. The HCI implementation came around the time USSFCU began virtualizing in 2014.

“Going to vSAN was a challenge against some of the traditional technology we had,” Fournier said. “Even though we were virtualizing, we were still siloing off storage, compute and networking. To get into what seems to be the future, we upgraded our branches using vSAN Remote Office Branch Office licensing. That allowed us to implement hyper-converged architecture in our branches for a lot less money than we expected.”

Fournier said the credit union put Ready Nodes on all-flash blade servers in three branches. He said a four-node all-flash implementation in one branch is so fast now that some of his organization’s developers want to move workloads to the branch.

“With the new PowerEdge M7000 from Dell, options for onboard storage are more flexible, and [it] allows us to bring vSAN out of the branches and into the data center now that management sees the benefit we get out of it,” Fournier said.

Think platform and relationships, and consider all options

The panelists said they did a lot of research before switching to HCI and picking a vendor. They evaluated products from leading HCI vendors, different offerings from the same vendor and compared HCI to traditional IT before making buying decisions.

Mariusz Nowak, director of infrastructure services at Oakland University in Rochester, Mich., said cost played a large role — as is often the case with educational institutions.

“I was sick and tired of replacing entirely every traditional storage array every few years and begging for new money, hundreds of thousands of dollars,” he said. “My boss, and everyone else, wasn’t happy to have to spend tons of money.”

Oakland University has been a VMware customer since 2005, and Nowak said he looked at early versions of vSAN hyper-converged software but felt it wasn’t ready for the university. After VMware added more enterprise features, such as stretched clusters, deduplication and encryption, Oakland installed the HCI software in 2017. It now has 12 vSAN hosts, with 400 guest virtual machines and 350 TB of storage on vSAN Ready Nodes running on Dell EMC PowerEdge servers.


“I choose Ready Nodes so I don’t have extra overhead,” Nowak said. “With VxRail, you have to pay more. With Ready Nodes, I can modify my hardware whenever I need, whether I need more capacity or more CPUs. Some HCI vendors will say, ‘This is the cookie-cutter node that you have to buy.’ We have more flexibility.”

Alex Rodriguez, VDI engineer at Rent-A-Center, based in Plano, Texas, said his company did a proof of concept (POC) with Dell EMC VxRail, Nutanix and SimpliVity — since acquired by Hewlett Packard Enterprise — when evaluating HCI in 2016. He said price and vendor relationships also figured in the final decision.

“When we did a POC, Nutanix won out,” he said. “But we saw a cost benefit with VxRail, and we decided to go in that direction because of our relationship with VMware. And each generation of this [vSAN] software has gotten a whole lot better. Performance is better and manageability is easy. You may find an application that’s better for one stack or another, but overall we think VxRail is a better platform.”

Divide and cluster

Several of the panelists suggested using clusters or stretch clusters with vSAN hyper-converged infrastructure to help separate workloads and provide availability.

Nowak said Oakland University installed 10 nodes in a stretched cluster across two campus data centers, with 10 Gigabit Ethernet uplinks to a witness site connecting them.

“For little cost, I have an active-active data center solution,” he said. “If we lost one data center, I could run almost my entire workload on another site, with no disruption. I can technically lose one site and shift my workload to another site.”

Rent-A-Center’s Rodriguez set up a four-node cluster with management applications and a 12-node cluster for VDI and other applications after installing Dell EMC VxRail appliances in 2016.

“We wanted to make sure we could manage our environment,” he said. “If we would’ve consolidated the management stack with the VDI stack and something happened, we would’ve lost control. Having segmentation gave us control.”

Dell EMC HCI and storage cloud plans on display at VMworld

LAS VEGAS — Dell EMC launched cloud-related enhancements to its storage and hyper-converged infrastructure products today at the start of VMworld 2018.

The Dell EMC HCI and storage product launch includes a new VxRail hyper-converged appliance, which uses VMware vSAN software. The vendor also added a cloud version of the Unity midrange unified storage array and cloud enhancements to the Data Domain data deduplication platform.

Dell EMC HCI key for multi-cloud approach?

Dell EMC is also promising synchronized releases between VxRail and the VMware vSAN software that turns PowerEdge servers into an HCI system, although the “synchronization” could take up to 30 days. Still, that’s an improvement over the six months or so it now takes for the latest vSAN release to make it to VxRail.


Like other vendors, Dell EMC considers its HCI a key building block for private and hybrid clouds. The ability to offer private clouds with public cloud functionality is becoming an underpinning of the multi-cloud strategies at some organizations.

Sam Grocott, senior vice president of marketing for the Dell EMC infrastructure solutions group, said the strong multi-cloud flavor of the VMworld product launches reflects conversations the vendor has with its customers.

“As we talk to customers, the conversation quickly turns to what we are doing in the cloud,” Grocott said. “Customers talk about how they’re evaluating multiple cloud vendors. The reality is, they aren’t just picking one cloud, they’re picking two or even three clouds in a lot of cases. Not all your eggs will be in one basket.”

Dell EMC isn’t the only storage vendor making its storage more cloud-friendly. Its main storage rival, NetApp, also offers unified primary storage and backup options that run in the cloud, and many startups focus on cloud compatibility and multi-cloud management from the start.

Grocott said Dell’s overall multi-cloud strategy is to provide a consistent operating model experience on premises, as well as in private and public clouds. That strategy covers Dell EMC and VMware products. Dell EMC VxRail is among the products that tightly integrate VMware with the vendor’s storage.

“That’s what we think is going to differentiate us from any of the competition out there,” he said. “Whether you’re protecting data or storing data, the learning curve of your operating model — regardless of whether you’re on premises or off premises — should be zero.”

Stu Miniman, a principal analyst at IT research firm Wikibon, said Dell EMC is moving toward what Wikibon calls a True Private Cloud.

Wikibon’s 2018 True Private Cloud report predicts almost all enterprise IT will move to a hybrid cloud model dominated by SaaS and true private cloud. Wikibon defines true private cloud as completely integrating all aspects of a public cloud, including a single point of contact for purchase, support, maintenance and upgrades.

“The new version of the private cloud is, let’s start with the operating model I have in the public cloud, and that’s how I should be able to consume it, bill it and manage it,” Miniman said. “It’s about the software, it’s about the usability, it’s about the management layer. Step one is to modernize the platform; step two is to modernize the apps. It’s taken a couple of years to move along that spectrum.”

HiveIO seeks to create buzz in HCI market

Newcomer HiveIO Inc. is trying to make it in the already crowded hyper-converged infrastructure market by touting a software-only application that it claims uses AI for resource management.

HiveIO this week released Hive Fabric 7.0, its hyper-converged application. The vendor, based in Hoboken, N.J., has actually been around since 2015 and shipped the first version of Hive Fabric that same year, but it kept a low profile until now. HiveIO co-founders Kevin McNamara and Ofer Bezalel came out of JPMorgan Chase’s engineering team. McNamara, now HiveIO’s CTO, said the goal was to create an infrastructure that consisted of one platform, was simple to use and was inexpensive.

“They thought about a single product, single vendor, hyper-converged fabric out of the box that just deploys and just works and reduces the complexity of the data center,” said HiveIO CEO Dan Newton, who joined HiveIO last April from Rackspace. “Our team comes from an operational background, and we’re focused on making our product operationally very easy, yet very stable. We try to make the technology work for the customers. We don’t want the customers to have to work to make it work.”

Newton said HiveIO has about 400 customers, including those it picked up by acquiring the assets of HCI software vendor Atlantis Computing in July 2017. HiveIO also inherited Atlantis’ OEM deal with Lenovo, which packaged Atlantis’ HCI software on its servers. However, HiveIO has no other hardware partnerships for Hive Fabric.

Newton said the goal is to provide HCI software that can deploy in 20 minutes on three nodes and requires little training to use.


HiveIO describes Hive Fabric as a “zero-layer, hardware-agnostic” hyper-converged platform that runs on any x86 server or in the cloud. Hive Fabric includes a free kernel-based virtual machine hypervisor, although it can also run with VMware and Microsoft hypervisors. Hive Fabric manages storage, compute, virtualization and networking across HCI clusters through its Message Bus. It includes a REST API and Universal Rest Interface to support third-party and customer applications.

McNamara called the artificial intelligence-driven Hive Fabric Message Bus “unique to the industry.” He said the Message Bus relies on AI and metadata to format data in real time and provide predictive analytics to prevent potential performance and capacity problems.

“It’s all integrated into the stack,” McNamara said. “We can see everything in the hardware, everything in the stack, everything in the guest server and everything in the application layers. We put the Message Bus into appliances and use machine learning to manage the appliances. You can move workloads across appliances.”

“Every data point comes through the Message Bus,” Newton added.

Hive Fabric 7.0 simplifies resource management through a Cluster Resource Scheduler (CRS). The CRS uses AI to monitor resource allocation across the cluster and moves guest virtual machines between servers to improve operational efficiency. Hive Fabric 7.0 also allows customers to run multiple mixed-application workloads.

HiveIO’s Hive Fabric 7 management dashboard

Forrester Research senior analyst Naveen Chhabra said HiveIO will need to prove its AI capabilities to make it in an HCI field that includes at least 15 vendors.

“A number of companies already have proven technology — including Nutanix, Cisco, Dell EMC, VMware,” Chhabra said. “HiveIO can do the same, but they must deliver at least table stakes technology, and then find out what innovations they can come up with. They talk about the interconnect fabric with artificial intelligence. It’s a transport layer for sending bits and bytes from one node to another. What kind of artificial intelligence does it have? Is it artificial intelligence or just AI washing like you hear from other vendors? And they have to find a strong use case for that artificial intelligence, even if it’s just one use case.”

HiveIO executives claim their early customers’ workloads include general server virtualization, virtual desktops, databases, log analysis and test/dev.

Hive Fabric is sold as a monthly subscription based on the number of physical servers with no restrictions on memory, storage or cores.

HiveIO promises to support Atlantis Computing hyper-converged and virtual desktop infrastructure software through 2022. Newton said HiveIO will offer Atlantis customers an upgrade path to Hive Fabric. He said HiveIO hired some Atlantis employees but is not using its technology in Hive Fabric.

HiveIO has 30 employees in the U.S. and U.K. It has completed two funding rounds and lists El Dorado Ventures, Rally Ventures, Osage Venture Partners and Citrix as investors but does not disclose its total funding.

HPE aims new SimpliVity HCI at edge computing

Hewlett Packard Enterprise has introduced a compact hyper-converged infrastructure system destined for running IoT applications at the network’s edge.

HPE unveiled the SimpliVity 2600 this week, calling the device the “first software-optimized offering” in the SimpliVity HCI line. The 2U system is initially built to run a virtual desktop system, but its size and computing power make it “ideal for edge computing applications,” said Lee Doyle, the principal analyst at Doyle Research, based in Wellesley, Mass.

Thomas Goepel, the director of product management at HPE, said the company would eventually market the SimpliVity 2600 for IoT and general-purpose applications that require a compact system with a dense virtualized environment.

As virtual desktop infrastructure, the SimpliVity 2600 provides a scale-out architecture that lets companies increase compute, memory and storage as needed. The system also has built-in backup and disaster recovery for desktop operations.

Intel Xeon processors with 22 cores each power the SimpliVity 2600, which supports up to 768 GB of memory. Hardware features include a redundant power supply, hot-pluggable solid-state drives, cluster expansion without downtime and an integrated storage controller with a battery-backed cache. The system also has a 10 GbE network interface card.

HPE’s planned Plexxi integration

HPE’s SimpliVity HCI portfolio stems from last year’s $650 million purchase of HCI vendor SimpliVity Corp. The acquired company’s technology for data deduplication and compression was a significant attraction for HPE, analysts said.

HPE has said it will eventually incorporate in its SimpliVity HCI systems the hyper-converged networking (HCN) technology of Plexxi. HPE announced its acquisition of Plexxi in May but did not disclose financial details.

“[An] HPE SimpliVity with Plexxi solution is on the roadmap,” Goepel said. He did not provide a timetable.

Plexxi’s HCN software enables a software-based networking fabric that runs on Broadcom-powered white box switches. Companies can use VMware’s vCenter dashboard to orchestrate virtual machines in a Plexxi HCI system. Plexxi software can also detect and monitor VMware NSX components attached to the fabric.

Cisco hyper-converged HyperFlex adds NVMe-enabled model

Cisco is bumping up the performance of its HyperFlex hyper-converged infrastructure platform with nonvolatile memory express flash storage.

The networking specialist in July plans to broaden its hyper-converged infrastructure (HCI) options with Cisco HyperFlex All NVMe. The new Cisco hyper-converged system is an NVMe-enabled 1U HX220c M5 Unified Computing System (UCS) server that’s integrated with dual Intel Xeon Skylake processors, Nvidia GPUs and Intel Optane DC SSDs.

The HX220c uses Intel Optane drives on the front end for caching. Four Intel 3D NAND NVMe SSDs of 8 TB each provide 32 TB of raw storage capacity per node. Optane SSDs are built on Intel's 3D XPoint memory technology.
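The per-node capacity figures above reduce to simple arithmetic. The sketch below works through it; the replication factor is a hypothetical example value for illustration, not a number from this article or from Cisco's documentation.

```python
# Illustrative capacity math for an all-NVMe HyperFlex node, using the
# figures cited in this article. The replication factor below is a
# hypothetical example value, not one taken from the article.

DRIVES_PER_NODE = 4      # Intel 3D NAND NVMe capacity SSDs per node
DRIVE_CAPACITY_TB = 8    # 8 TB each

raw_per_node_tb = DRIVES_PER_NODE * DRIVE_CAPACITY_TB  # 32 TB raw per node

def usable_cluster_tb(nodes, replication_factor=3):
    """Rough usable capacity: raw capacity divided by the number of data
    copies the cluster keeps, before dedupe/compression savings."""
    return nodes * raw_per_node_tb / replication_factor

print(raw_per_node_tb)       # 32
print(usable_cluster_tb(8))  # an 8-node cluster at the example factor of 3
```

The same arithmetic explains why a move to larger NVMe SSDs, mentioned later in the article, doubles per-node raw capacity without changing the node count.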

HyperFlex 3.5 software extends Cisco Intersight compute and network analytics to storage. Version 3.5 supports virtual desktop infrastructure with Citrix Cloud services and hyper-convergence for SAP database applications.

“This is clearly a performance play for Cisco, with the addition of NVMe and support for Nvidia GPUs,” said Eric Slack, a senior analyst at Evaluator Group, a storage and IT research firm in Boulder, Colo. “They’re talking about SAP modernization. Cisco is going to try and sell hyper-converged to a lot of folks, but initially the targeting will be their UCS customer base. And that makes sense.”

Cisco: Hyper-converged use cases are expanding        

More than 2,600 customers have installed HyperFlex, many of them existing Cisco UCS users, said Eugene Kim, a Cisco HyperFlex product marketing manager. He said customers are “pushing the limits” of HyperFlex for production storage.

“The all-flash HyperFlex we introduced [in 2017] comprises about 60% of our HCI sales. We see a lot of customers running mission-critical applications, and some customers are running 100% on HyperFlex,” Kim said.

Hyper-convergence relies on software-defined storage to eliminate the need for dedicated storage arrays. An HCI system packages all the necessary computing resources — CPUs, networking, storage and virtualization tools — as a single integrated appliance. That differs from converged infrastructure, in which customers buy discrete components by the rack and bundle them together with a software stack.
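The packaging difference described above can be sketched as a data model. This is a purely conceptual illustration; the class names and field values are invented for the sketch and are not any vendor's API.

```python
# Conceptual sketch of HCI vs. converged infrastructure packaging.
# All names and values here are illustrative, not a vendor API.
from dataclasses import dataclass, field

@dataclass
class HCINode:
    """One hyper-converged appliance: compute, software-defined storage,
    networking and a hypervisor ship together and scale as a unit."""
    cpu_cores: int
    storage_tb: int
    networking: str = "integrated"
    hypervisor: str = "bundled"

@dataclass
class ConvergedRack:
    """Converged infrastructure: discrete servers, storage arrays and
    switches bought by the rack and tied together by a software stack."""
    servers: list = field(default_factory=list)
    storage_arrays: list = field(default_factory=list)
    switches: list = field(default_factory=list)

# Scaling HCI means adding identical nodes to a cluster:
cluster = [HCINode(cpu_cores=44, storage_tb=32) for _ in range(4)]
print(sum(node.storage_tb for node in cluster))  # 128 TB raw in the cluster
```

The key design point the sketch captures: an HCI cluster grows by adding whole nodes, while converged infrastructure lets each resource type be sized independently.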

Cisco hyper-converged products were late to market compared with other HCI vendors. Cisco introduced HyperFlex in 2016 in partnership with Springpath, bundling the startup’s log-structured distributed file system with integrated Cisco networking. Cisco was an early investor in Springpath and eventually acquired it in 2017.

Cisco’s HCI market share jumped from 2.5% in the fourth quarter of 2016 to 4.5% in the fourth quarter of 2017, according to IDC. HyperFlex sales generated more than $56 million — a 200% increase year over year. Still, Cisco was in fourth place behind Dell, Nutanix and Hewlett Packard Enterprise in HCI hardware share, according to IDC.

As part of its partnership with Intel, Cisco added Intel Volume Management Device (VMD) to HyperFlex 3.5. Intel VMD allows NVMe devices to be hot-swapped on the PCIe bus without a system shutdown.

Much of the heavy lifting for Cisco hyper-converged infrastructure came with the HyperFlex 3.0 release in January, which added support for Microsoft Hyper-V alongside existing support for VMware hypervisors, plus the Cisco Container volume driver to launch persistent storage containers with Kubernetes.

Owning the compute, network and storage software gives Cisco hyper-converged systems an advantage over traditional hardware-software HCI bundles, said Vikas Ratna, a product manager at Cisco.

“We believe being able to optimize the stack up and down provides the best on-ramp for customers [to adopt HCI]. We don’t have to overengineer, as we would if we just owned the software layer,” Ratna said.

Customers can scale Cisco HyperFlex to 64 nodes per cluster. Ratna said Cisco plans to release a 2U HyperFlex that scales to 64 TB of raw storage per node when larger NVMe SSDs are generally available.

Cisco HyperFlex system upgrade targets hybrid cloud

Cisco has added tools to its hyper-converged infrastructure platform for running and managing hybrid applications split between public and private clouds. The latest technology in the Cisco HyperFlex system makes it a stronger competitor in the market, analysts said.

Cisco introduced the 3.0 software release for HyperFlex this week. The announcement came a day after Cisco said it would acquire Skyport Systems Inc., a maker of highly secure, cloud-managed, hyper-converged systems.

In general, HyperFlex combines software-defined storage and data services software with Cisco Unified Computing System. UCS integrates computing, networking and storage resources to provide efficiency and centralized management.

The latest release packs a lot more Cisco software into HyperFlex, which should improve interoperability and simplify support, said Dan Conde, an analyst at Enterprise Strategy Group Inc., based in Milford, Mass. “Cisco has taken many of the assets that used to be separate in their stable and made it available under a single [HyperFlex] umbrella.”

The new features should also make HyperFlex more competitive and useful as a hybrid cloud platform, analysts said. In the hyper-converged infrastructure (HCI) market, Cisco has lagged behind rivals Dell, Hewlett Packard Enterprise and Nutanix.

Software added to the Cisco HyperFlex system

HyperFlex customers now have the option of integrating Cisco AppDynamics for monitoring the performance of applications running on HyperFlex and across clouds. Other cloud-related management software available for the HCI system includes Cisco Workload Optimization Manager (CWOM) and CloudCenter.

CWOM helps IT staff determine the resource needs of workloads. CloudCenter provides application-centric orchestration.

Other new features include support for Microsoft's Hyper-V hypervisor. HyperFlex already supports the more popular VMware ESXi, but Hyper-V is often used to run Microsoft applications.

Release 3 of the Cisco HyperFlex system also contains support for Kubernetes-managed containers, making HyperFlex friendlier to developers building cloud-native applications.

Along with cloud apps, companies can run more enterprise applications on HyperFlex. Cisco released validated designs and guides for running Oracle, SAP, Microsoft and Splunk software.

The most prominent use case for HCI systems is running business applications on a general computing platform, according to Nemertes Research, based in Mokena, Ill. Roughly 30% of enterprises use HCI for general computing, followed by private cloud at 19%.

Increased scalability in the Cisco HyperFlex system

Cisco has increased the scalability of HyperFlex. Customers can raise VM density by joining HyperFlex systems into clusters, which can now contain up to 64 nodes. The previous maximum was eight.

Cisco has also added support for stretched clusters, which let a single cluster's nodes span multiple geographic locations.

Overall, analysts expect the new features to help Cisco add to the more than 2,500 companies using HyperFlex today.

“This announcement, combined with the market still being ripe for adoption, is a great combo going forward,” said Mike Leone, an analyst at Enterprise Strategy Group. “It will be interesting to see how the customer base grows now that they’re on a more level playing field with the competition.”

Plans for Skyport acquisition

The Skyport acquisition brings a tightly knit hardware and software product to Cisco’s portfolio. The system is primarily used to run business-critical data center applications.

“I think Cisco’s goal is to get the automated, security-wrapped provisioning software [in Skyport] and just fold it into their cloud and infrastructure management tools broadly,” said Nemertes analyst John Burke.

That may be so, but for now, Cisco has provided no details, saying in a statement it plans to use Skyport’s “intellectual property, seasoned software and network expertise to accelerate priority areas across multiple Cisco portfolios.”

The Skyport team will join Cisco’s networking group, led by general manager Jonathan Davidson, and the data center and computing systems product group, headed by general manager Liz Centoni. Cisco did not disclose financial terms.

Hyper-converged infrastructure disperses for edge computing

Having established a foothold inside many corporate data centers, hyper-converged infrastructure is poised to extend its reach into the world of edge computing, although precious few IT shops have fully formed edge computing strategies.

The explosion of IoT technologies to collect, analyze and stream rivers of data to central data systems has pushed edge computing onto many IT shops' radar. This will be no casual encounter, but rather a cosmic collision over the next five years: Gartner predicts that 40% of all enterprises will have a full-blown edge computing strategy in place by 2021, up from less than 1% as of late 2017.

“Out on the edge … is where the physical will meet the digital,” said Dave Russell, vice president and analyst at Gartner, speaking at the company’s annual data center conference earlier this month.

Joining the array of IoT devices out on the edge is an influx of hyper-converged infrastructure systems. The vast majority of hyper-converged systems currently reside in the central data centers of large corporate users or their service providers, serving as less expensive cloud on-ramps or as supplemental processing power for core servers. But IT shops have begun to rethink hyper-converged infrastructure for edge computing as the technology evolves: more robust hardware, falling prices, and steadily improved capabilities, from remote management to simpler installation and configuration.

“People want to move so much faster now. … They want to go to one vendor and just drop [hyper-converged systems] into their environment,” said Jeff Hewitt, a research vice president at Gartner. “[Hyper-converged systems] are small and getting smaller, quick to deploy and easier to manage. People feel more confident about putting them out on the edge.”

Hyper-converged infrastructure, or HCI, servers found early appeal in remote offices/branch offices (ROBO), but over the past year hyper-converged infrastructure has been deployed in more edge locations on factory floors, disaster recovery sites, retail stores and warehouses, according to Gartner.

“I haven’t considered them [hyper-converged systems] for the edge, because we’ve been deploying IoT products,” said Todd Hansen, a project manager at a Midwest-based engineering firm that he said deploys sensors to collect and analyze multiple streams of “big data” from field engineering projects passed on to its central data center. “But their smaller form factor might open up possibilities for us in field offices.”

Over the next year or two the battle over real estate among the many edge computing products figures to intensify. Gartner is currently compiling data about hyper-converged infrastructure for edge computing, with results expected in early 2018. How successful HCI offerings will be out on the edge is uncertain, although Hewitt likes their chances.

“It’s hard to say in these early days, but it is safe to say there will be a lot of [hyper-converged systems] deployed out there,” he said.

Pick your partner for the hyper-converged edge

Enterprises have several options to deploy and support hyper-converged systems in edge computing environments, and the pros and cons for each will be familiar to most IT pros.

Infrastructure providers such as Dell offer the full range of hardware and software technologies, plus the technical support to help integrate HCI edge systems with systems back in the central data center. The downside is that such vendors are hardly agnostic and will push their own products.

Facility specialists tend to be hardware agnostic and offer a wider range of options for modular hyper-converged systems, but some may lack the support organization to help Fortune 500 companies.

Regional providers can be appealing due to their proximity and sometimes a more personal service touch. But size matters with an IT partner, and these providers don’t have enough of it.

“Regional providers are usually close by and they can get to know your needs pretty well,” said one senior engineer with an aeronautics company who attended the session. “But the downside is they don’t have the range of pre- and post-sales support a company like Dell can offer.”

In an informal instapoll during a session at the Gartner conference, the majority of audience responders indicated a preference for infrastructure providers to help install and support hyper-converged systems out on the edge, followed by facilities specialists. Only a small handful indicated they would prefer to work with regional providers.

Ed Scannell is a senior executive editor with TechTarget. Contact him at [email protected].