Cisco MDS 9700 switches prep for 64G Fibre Channel NVMe-oF

Gearing up for adoption of nonvolatile memory express over fabrics, Cisco upgraded its multilayer MDS network switches to help shops transition to the next generation of Fibre Channel block storage.

Cisco will add line cards to the Cisco MDS 9700 family for in-place hardware upgrades and will extend Cisco SAN Analytics to support the NVMe protocol.

The new Cisco MDS 9700 switching hardware enables data centers to run multiple Fibre Channel (FC) generations in the same chassis. Other new features include Ansible modules that automate storage deployment tasks for virtual SANs (VSANs), device aliases and zoning.

Cisco said it plans to ship 64G line cards for the MDS 9706, MDS 9710 and MDS 9718 Director switches by the end of 2019. The new cards are timed to arrive in advance of 64 gigabit-per-second FC, also known as Gen 7 FC. A data center can install the new line card to run 64 Gbps FC concurrently with existing 16 Gbps and 32 Gbps traffic.

MDS 9700 switches are part of the Cisco MDS 9000 product line, which consists of large networking devices that centralize the management of storage traffic at the switch level. Cisco MDS 9700 products launched in 2013, around the time NVMe flash media emerged as a contender to SATA-based SSDs.

Cisco follows Brocade

The latest Cisco MDS product update comes nearly 18 months after SAN switching rival Brocade, now part of semiconductor giant Broadcom, brought similar products to market. Broadcom and Cisco are the only large vendors that sell FC network switches, and both are positioning those devices for NVMe over FC implementations. There are also Ethernet and InfiniBand options for running NVMe over Fabrics (NVMe-oF).

FC technology delivers a high level of lossless performance, while NVMe delivers a dramatic reduction in latency by routing traffic across PCI Express lanes. The combination is expected to have broad appeal for data centers with applications demanding extremely high performance.

Reengineering the Cisco MDS 9700 required a lot of work to avoid “rip and replace” scenarios, said Scott Sinclair, an analyst for storage at Enterprise Strategy Group, an IT research firm in Milford, Mass.

“There is a big desire to transition storage networks to NVMe, and the Fibre Channel community is making it insanely easy to do. Cisco had to do a lot of hard work to make this transition seamless, and that will help companies save a ton of money over the long haul,” Sinclair said.

Data centers can adapt existing FC technologies for NVMe via a software upgrade. FC has fewer hurdles to NVMe adoption than Ethernet-based remote direct memory access (RDMA) technologies, which include RDMA over Converged Ethernet and the Internet Wide Area RDMA Protocol. Another NVMe fabric option is TCP, which uses the server-native TCP/IP stack and is popular with hyperscale cloud providers.
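To make those transport choices concrete, here is a minimal sketch of how a Linux host might attach to an NVMe-oF target with the standard nvme-cli utility, wrapped in Python. The target NQN, IP address and port below are hypothetical placeholders, and FC targets use WWNN/WWPN addressing rather than the IP-style arguments shown.

```python
import subprocess

# Hypothetical target details; substitute values from your own fabric.
TARGET_NQN = "nqn.2014-08.org.example:nvme-target1"
TARGET_ADDR = "10.0.0.5"
TARGET_PORT = "4420"  # conventional NVMe-oF service ID

def nvme_connect(transport: str) -> None:
    """Attach this host to an NVMe-oF target via nvme-cli.

    transport: "tcp" for NVMe/TCP (no special NICs required) or
    "rdma" for RoCE/iWARP (needs RDMA-capable adapters).
    """
    subprocess.run(
        ["nvme", "connect",
         "-t", transport,      # fabric transport type
         "-a", TARGET_ADDR,    # target address
         "-s", TARGET_PORT,    # transport service ID
         "-n", TARGET_NQN],    # NVMe qualified name of the target
        check=True,            # raise if the connect fails
    )

if __name__ == "__main__":
    nvme_connect("tcp")        # the hyperscaler-friendly option noted above
```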

Enhanced troubleshooting

Onboard telemetry is native to all Cisco MDS 9000 switches. The latest iteration of the software is designed to capture high-fidelity reads of all traffic, including traditional SCSI block messages and data sent via NVMe-oF. The tool lets admins step back through captured traffic an hour at a time to pinpoint trouble spots in networks or storage.

Cisco hyper-converged HyperFlex adds NVMe-enabled model

Cisco is bumping up the performance of its HyperFlex hyper-converged infrastructure platform with nonvolatile memory express flash storage.

The networking specialist in July plans to broaden its hyper-converged infrastructure (HCI) options with Cisco HyperFlex All NVMe. The new Cisco hyper-converged system is an NVMe-enabled 1U HX220c M5 Unified Computing System (UCS) server that’s integrated with dual Intel Xeon Skylake processors, Nvidia GPUs and Intel Optane DC SSDs.

The HX220c uses Intel Optane drives on the front end for caching. Four Intel 3D NAND NVMe SSDs of 8 TB each provide 32 TB of raw storage capacity per node. Optane SSDs are based on Intel 3D XPoint memory technology.

HyperFlex 3.5 software extends Cisco Intersight compute and network analytics to storage. Version 3.5 supports virtual desktop infrastructure with Citrix Cloud services and hyper-convergence for SAP database applications.

“This is clearly a performance play for Cisco, with the addition of NVMe and support for Nvidia GPUs,” said Eric Slack, a senior analyst at Evaluator Group, a storage and IT research firm in Boulder, Colo. “They’re talking about SAP modernization. Cisco is going to try and sell hyper-converged to a lot of folks, but initially the targeting will be their UCS customer base. And that makes sense.”

Cisco: Hyper-converged use cases are expanding        

More than 2,600 customers have installed HyperFlex, many of them existing Cisco UCS users, said Eugene Kim, a Cisco HyperFlex product marketing manager. He said customers are “pushing the limits” of HyperFlex for production storage.

“The all-flash HyperFlex we introduced [in 2017] comprises about 60% of our HCI sales. We see a lot of customers running mission-critical applications, and some customers are running 100% on HyperFlex,” Kim said.

Hyper-convergence relies on software-defined storage to eliminate the need for dedicated storage arrays. An HCI system packages all the necessary computing resources — CPUs, networking, storage and virtualization tools — as a single integrated appliance. That's different from converged infrastructure, in which customers buy components by the rack and bundle them together with a software stack.

Cisco arrived late to the HCI market compared with other vendors. It introduced HyperFlex in 2016 in partnership with Springpath, bundling the startup's log-structured distributed file system with integrated Cisco networking. Cisco was an early investor in Springpath and acquired it outright in 2017.

Cisco’s HCI market share jumped from 2.5% in the fourth quarter of 2016 to 4.5% in the fourth quarter of 2017, according to IDC. HyperFlex sales generated more than $56 million — a 200% increase year over year. Still, Cisco was in fourth place behind Dell, Nutanix and Hewlett Packard Enterprise in HCI hardware share, according to IDC.

As part of its partnership with Intel, Cisco added Intel Volume Management Device to HyperFlex 3.5. Intel VMD allows NVMe devices to be hot-swapped on the PCIe bus without a system shutdown.

Much of the heavy lifting for Cisco hyper-converged infrastructure came with the HyperFlex 3.0 release in January. It added support for Microsoft Hyper-V, in addition to existing support for VMware hypervisors, and the Cisco Container volume driver for launching persistent storage containers with Kubernetes.

Owning the compute, network and storage software gives Cisco hyper-converged systems an advantage over traditional hardware-software HCI bundles, said Vikas Ratna, a product manager at Cisco.

“We believe being able to optimize the stack up and down provides the best on-ramp for customers [to adopt HCI]. We don’t have to overengineer, as we would if we just owned the software layer,” Ratna said.

Customers can scale Cisco HyperFlex to 64 nodes per cluster. Ratna said Cisco plans to release a 2U HyperFlex that scales to 64 TB of raw storage per node when larger NVMe SSDs are generally available.

Tegile IntelliFlash array family welcomes all-NVMe sibling

Tegile Systems has broadened its storage array lineup with two models designed for nonvolatile memory express flash.

The Tegile IntelliFlash N-5000 Series arrays are expected to be generally available in the fourth quarter. The unified block and file all-flash arrays bundle nonvolatile memory express (NVMe) SSDs from Western Digital-owned HGST with the Tegile IntelliFlash operating system.

N Series arrays include two forms of memory: double data rate fourth-generation (DDR4) synchronous DRAM and nonvolatile dual in-line memory modules (NVDIMMs). The dual-controller arrays pack 24 PCIe NVMe SSDs in 2U.

The high-end Tegile IntelliFlash N5800 scales from 76 TB to 153 TB of raw NVMe flash. Outfitted with 3 TB of DDR4 memory and 64 GB of NVDIMM memory per system, the N5800 is aimed at big data analytics and streaming real-time workloads. Endurance is rated at three drive writes per day.

The N5200 building block is designed for IOPS-intensive workloads that can tolerate lower performance than the N5800 delivers. Raw storage scales from 23 TB to 46 TB. A single N5200 system supports 768 GB of DDR4 and 16 GB of NVDIMM capacity, and the system is rated for one drive write per day.
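For a sense of what those endurance ratings mean in practice, drive writes per day (DWPD) translates into total bytes written over the service life. The five-year period below is an illustrative assumption, not a stated Tegile warranty term.

```python
def lifetime_writes_pb(dwpd: float, capacity_tb: float, years: float = 5.0) -> float:
    """Total data written over the rated period, in petabytes.

    DWPD is quoted against system capacity; the five-year
    service life is an assumption for illustration.
    """
    return dwpd * capacity_tb * 365 * years / 1000.0

# N5800: 3 DWPD at its 153 TB maximum raw capacity
print(lifetime_writes_pb(3, 153))  # ~838 PB over five years
# N5200: 1 DWPD at its 46 TB maximum raw capacity
print(lifetime_writes_pb(1, 46))   # ~84 PB over five years
```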

Tegile claims a rack of IntelliFlash NVMe arrays can handle more than 60 million IOPS at a response time of 200 microseconds. Effective rack capacity scales to 14 PB after deduplication and compression. IntelliFlash OS provides at-rest data encryption and data protection with clones, snapshots and replication.
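Those two rack-level figures imply a data reduction ratio that is easy to back out. The arithmetic below assumes a standard 42U rack filled with 2U N5800 arrays at their 153 TB maximum raw capacity; Tegile does not publish the ratio itself.

```python
# Back-of-the-envelope check on the rack-level capacity claim.
arrays_per_rack = 42 // 2                  # 2U arrays in a 42U rack -> 21
raw_pb = arrays_per_rack * 153 / 1000.0    # ~3.2 PB raw per rack
effective_pb = 14.0                        # Tegile's post-reduction figure
print(effective_pb / raw_pb)               # implied ~4.4:1 data reduction
```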

“Speed and feeds aside, this is like taking 10 racks of legacy storage gear and compressing it down to one rack that can deliver an order-of-magnitude performance increase,” said Narayan Venkat, Tegile chief marketing officer.

Tegile pins application metadata to fast tier of flash

The NVMe protocol bypasses the latency associated with the traditional iSCSI stack. Rather than addressing data with storage semantics, an NVMe system addresses bits and bytes with memory semantics. The change is designed to deliver higher levels of parallelism and throughput at ultra-low latency.

“With NVMe, you’ve got 64,000 different data paths into each one of those drives. The ability to parallelize that workload in the software makes NVMe even more interesting,” said Eric Burgener, a research director for storage at IT analyst firm IDC. “That is one area for differentiation on the software side. It would allow systems to get more performance out of NVMe technologies.”
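The parallelism Burgener describes comes straight from the NVMe specification's queue limits. The comparison with the single 32-command AHCI/SATA queue below is added here for scale and is not part of his remarks:

```python
# NVMe allows up to 65,535 I/O queue pairs per controller -- the
# "64,000 data paths" in the quote -- each up to 65,536 commands deep.
nvme_in_flight = 65_535 * 65_536   # ~4.3 billion outstanding commands
ahci_in_flight = 1 * 32            # legacy AHCI/SATA: one queue, depth 32
print(nvme_in_flight // ahci_in_flight)  # ~134 million times the headroom
```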

Customers can mix and match Tegile hybrid and all-flash arrays. The Tegile software stack separates an application's data and metadata, pinning the metadata to a tier of high-performance flash. Most systems store application data and metadata together, which requires the extra step of accessing disk or a DRAM cache to retrieve file metadata.
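The sketch below illustrates the general idea of metadata pinning; it is not Tegile's implementation. Metadata lands on a fast tier in the same operation that writes the data, so later metadata lookups never touch the slower data tier.

```python
class TieredStore:
    """Toy model of a store that pins metadata to a fast flash tier."""

    def __init__(self):
        self.fast_tier = {}   # high-performance flash: metadata only
        self.slow_tier = {}   # bulk media: application data

    def write(self, path: str, data: bytes) -> None:
        # Data and metadata are written in the same operation, but
        # the metadata is pinned to the fast tier.
        self.slow_tier[path] = data
        self.fast_tier[path] = {"size": len(data), "location": path}

    def stat(self, path: str) -> dict:
        # Served entirely from the fast tier -- no data-tier access.
        return self.fast_tier[path]

    def read(self, path: str) -> bytes:
        meta = self.fast_tier[path]          # fast metadata lookup first
        return self.slow_tier[meta["location"]]
```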

Venkat said Tegile lays out data to maximize the performance of the underlying storage media.

Tegile started shipping storage arrays in 2012. The new NVMe arrays slot between the Tegile IntelliFlash HD multi-tiered arrays and Tegile IntelliStack converged infrastructure. Tegile’s flagship is the T4000 Series of hybrid flash and all-flash arrays.

Tegile N-5000 NVMe all-flash array.

NVMe scrimmage: Proprietary PCIe mesh vs. custom NVMe SSDs

The NVMe ecosystem is evolving, spawning alternative approaches to installing NVMe flash in the chassis. Pure Storage's FlashArray//X incorporates proprietary NVMe modules directly on blades. Dell EMC, conversely, in March discontinued its DSSD D5 product, which relied on a custom-designed NVMe mesh.

Tegile IntelliFlash products originally were designed on the InfiniFlash all-flash chassis created by SanDisk, also part of Western Digital. That enterprise MLC-based flash product has since been scrapped.

“Tegile is an example of the overall trend [of vendors starting to] move away from the use of custom hardware designs in these high-performance systems,” Burgener said. “In terms of the cost per gigabyte and cost per IOPS, they bring a pretty good value to the table. Tegile systems don’t scale as high in terms of overall IOPS as some other systems, but their metadata implementation is a nice little differentiator.”

Burgener said that because Western Digital is a strategic investor in Tegile, the vendor has a guaranteed supply of NVMe SSDs at a reduced unit cost, even during the current NAND shortage.

Tegile's all-flash NVMe arrays execute writes in persistent memory. To achieve a high ratio of cache hits, multiple read-write requests are bundled into transaction groups, and random writes are automatically converted to sequential writes. Hot data remains in nonvolatile memory, and Tegile's adaptive algorithm handles multiple I/O patterns.
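A minimal sketch of that write path follows, under the assumption that a transaction group is simply a batch of staged writes flushed together. Writes are acknowledged once staged in persistent memory and drained to flash in LBA order, turning random writes into a sequential stream. The backend_write helper is a hypothetical stand-in for the flush to flash.

```python
def backend_write(lba: int, data: bytes) -> None:
    """Hypothetical stand-in for a sequential write to backing flash."""

class WriteCoalescer:
    """Toy model of bundling random writes into transaction groups."""

    def __init__(self, group_size: int = 64):
        self.group_size = group_size
        self.staged = {}            # lba -> data; stands in for NVDIMM cache

    def write(self, lba: int, data: bytes) -> None:
        self.staged[lba] = data     # acknowledged once in persistent memory
        if len(self.staged) >= self.group_size:
            self.flush()

    def flush(self) -> None:
        # Draining in ascending LBA order converts the accumulated
        # random writes into one sequential pass over the flash.
        for lba in sorted(self.staged):
            backend_write(lba, self.staged[lba])
        self.staged.clear()
```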

The new Tegile storage runs the NVMe protocol atop an internal PCIe fabric. Venkat said that will allow Tegile to extend its embedded PCIe fabric as NVMe over Fabrics specifications mature.

“We have the ability to literally flip a switch and export capacity as an NVMe target, just like Fibre Channel, iSCSI, NFS and SMB are objects,” Venkat said.

Writable shared flash volumes highlight E8 Storage upgrade

Startup E8 Storage has sharpened the focus of its nonvolatile memory express all-flash arrays, adding support for parallel file systems in a bid to boost scalability and shared flash storage.

The upgrade allows users to scale capacity beyond a single appliance by letting host machines access multiple E8 Storage appliances. The enhanced E8 Storage software supports shared writable volumes, which the vendor claims allow 96 clustered hosts to read and write to the same volume in parallel at line speed. That feature is geared initially toward organizations running IBM Spectrum Scale — formerly IBM General Parallel File System — and Oracle Real Application Clusters (RAC) environments, although shared flash has implications for any parallel file system used in technical computing.

The vendor this week also previewed the E8-X24 block array at the Flash Memory Summit in Santa Clara, Calif. The X-24 is a companion to the flagship E8 Storage D-24 rack-scale flash system launched last year. It will allow customers to mix and match NAND flash and storage-class memory in the same box. E8 Storage said X-24 proofs of concept are underway at cloud providers and at financial services and travel industry firms. The array is expected to be generally available in the fourth quarter.

“The focus of this release is to increase the agility of our system for application acceleration. We’re supporting more parallel file architectures to help customers get the most processing power and move away from serial access to data,” said Julie Herd, director of technical marketing for E8 Storage.

Shared writable volumes connect multiple hosts to back end

The nonvolatile memory express (NVMe) host controller interface is designed to speed data transfer between host systems and flash media. The NVMe protocol transmits commands across the PCI Express interconnect, bypassing the hops imposed by traditional networking components.

The E8 Storage shared flash block system uses dual-ported server and rack hardware from OEM AIC Inc. It supports 24 NVMe SSDs of 7.68 TB each, which scales usable flash capacity to 140 TB per appliance. Drives connect via a Remote Direct Memory Access over Converged Ethernet high-performance fabric. E8 client software handles dynamic LUN provisioning, RAID 6 schemes and thin provisioning.
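As a sanity check on that usable figure, assuming the 140 TB applies to one fully populated 24-drive appliance, the implied RAID 6 and spare overhead works out to roughly a quarter of raw capacity:

```python
raw_tb = 24 * 7.68           # 24 NVMe SSDs of 7.68 TB each -> ~184.3 TB raw
usable_tb = 140.0            # E8's stated usable capacity
print(usable_tb / raw_tb)    # ~0.76, i.e. ~24% overhead for RAID and spares
```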

Although sharing a volume isn't a new idea, supporting it with block storage is a challenge. It requires vendors to build software capabilities into the storage layer, particularly a locking mechanism that allows clustered servers to simultaneously read and write to the same volume without interfering with one another.

In its rack-scale deployment, each host sees the E8 Storage appliances as local block storage. A parallel file system writes data to those appliances at the host level. The E8 agent responds to lock calls to prevent data collisions as multiple hosts attempt to access the volume in real time.
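The sketch below illustrates the locking idea in miniature; it is not E8's agent. Each writer must hold the lock covering the extent it targets, so clustered hosts writing different extents proceed in parallel while collisions on the same extent are serialized.

```python
import threading
from collections import defaultdict

class ExtentLockManager:
    """Toy per-extent lock table for a shared writable volume."""

    def __init__(self):
        self._locks = defaultdict(threading.Lock)  # extent id -> lock
        self._guard = threading.Lock()             # protects the table

    def _lock_for(self, extent: int) -> threading.Lock:
        with self._guard:
            return self._locks[extent]

    def write(self, extent: int, do_write) -> None:
        # Only one host writes a given extent at a time; writers to
        # other extents are not blocked.
        with self._lock_for(extent):
            do_write()

# Usage: writers to extents 7 and 8 can proceed concurrently, but a
# second writer to extent 7 waits its turn.
mgr = ExtentLockManager()
mgr.write(7, lambda: print("host A wrote extent 7"))
mgr.write(8, lambda: print("host B wrote extent 8"))
```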

“This was one of the early-on requests we had from customers: the ability to have read and write access to shared flash. We’ve had it in test with IBM Spectrum Scale for a couple months. Now, we’re ready to launch,” Herd said.

Eric Burgener, a storage analyst with IT firm IDC, said E8 Storage offers a potential alternative to the Oracle Exadata in-memory product that supports large Oracle RAC deployments, which require underlying high-performance storage. Oracle does not have an end-to-end NVMe implementation for Exadata.

“For a company the size of E8 Storage, selling even 10 systems in a year into Oracle RAC environments would be a pretty big deal. They have a better performance than Oracle Exadata and cost about one-third less. Now is the time for E8 to get into those environments that will be looking to refresh every quarter,” Burgener said.

Other potential use cases for E8 to pursue involve parallel file-system-based technical computing for big data, fraud detection, life sciences, online transaction processing and seismic processing, Burgener said.

Choose between flash and SCM, with dedicated RAID

Herd said E8 Storage is testing the forthcoming X-24 array with Intel’s Optane-based storage-class memory SSDs. The Optane drives provide a persistent memory cache designed to mimic the performance of an in-memory database.

Rather than having an in-memory cluster access servers across a network, E8 said its architecture provides better scalability by eliminating the need to build dedicated storage into the servers. Dedicated network links ensure each tier of storage gets sufficient bandwidth.

One feature lacking is dynamic tiering between shared flash and storage-class memory. Herd said E8 Storage customers will have to determine which database apps require in-memory-like performance.

The upgrade also lets hosts access multiple E8 Storage appliances; initially, customers could connect 96 host servers to a single appliance. The new configuration allows NAND flash and Intel Optane SSDs to be shared across D-24 and X-24 arrays. Instead of one large RAID configuration, customers can create multiple smaller RAID groups and dedicate each to a specific cluster.

E8 Storage is among a handful of startups trying to sell fast, scalable shared flash storage using off-the-shelf NVMe drives. Other entrants include Apeiron Data Systems and software-defined Excelero. Two other hopefuls, Pavilion Data Systems and Vexata, have yet to formally unveil their storage gear.