
SAP C/4HANA hopes to tie together front and back office

ORLANDO, Fla. — SAP is setting its sights on Salesforce with a new suite of customer experience products called SAP C/4HANA.

Unveiled at the opening keynote here at SAP Sapphire Now, SAP C/4HANA brings together SAP’s marketing, commerce, sales and service cloud products, sitting them all atop its Customer Data Cloud and embedding machine learning with SAP Leonardo.

“SAP was the last to accept the status quo, and SAP will be the first to change it,” said Bill McDermott, CEO for SAP. “We’re moving from a 360-degree view of sales automation to a 360-degree view of the customer. The entire supply chain is connected to customer experience.”

SAP is hoping that by connecting back-office capabilities in SAP ERP products to the front office, the company can provide an end-to-end experience for its users, something few vendors can offer. SAP executives called the release of SAP C/4HANA an inflection point for SAP and the CRM industry.

“The roadmap for HANA and S/4HANA gave us what we needed to connect the back office to the front office,” McDermott said.

In addition to connecting back-office functionality, SAP’s new CX suite was also spurred by the separate acquisitions of Hybris, Gigya and CallidusCloud, which added the capabilities necessary to bring together these products.

“The goal is a single view of the customer,” said Alex Atzberger, president of customer experience for SAP. “With the acquisition of Gigya, we manage 1.3 billion profiles, and this is what’s happening in CRM. It’s about effectiveness and efficiency and how can you effectively target and engage a particular customer.”

Atzberger added that this customer engagement needs to put the customer first and foremost: courting a customer can't be creepy. Rather, the suite should give users the tools to move a customer along the entire marketing, sales and service pipeline.

We’re moving from a 360-degree view of sales automation to a 360-degree view of the customer.
Bill McDermott, CEO, SAP

It has been a long-standing goal of SAP's to combine its industry-leading ERP tools with its CRM tools and become the first major vendor to unite front- and back-office capabilities. Time will tell whether SAP can achieve this with C/4HANA, but the company appears to be on the right track.

“They’ve been saying this for years, so what changed? I really think they’re finally executing on what they want to do and the architecture caught up and the acquisitions helped tie it together,” said Sheryl Kingstone, research vice president at 451 Research. “This ties to their cloud platform, and it was critical for that vision they have to connect the dots. These are things that Salesforce is trying to figure out in regards to the 360-degree customer view.”

While SAP admitted it was slow to adapt to this modern view of the customer, it’s hoping that by stringing together this suite of applications, it can provide the customer experience businesses are vying for.

“It’s not only about connecting that end-to-end chain, but also to give the best user experience in the industry,” McDermott said. “SAP is capable of doing this, and now we’re ready.”

The importance of SAP's various acquisitions over the past couple of years can't be overstated when it comes to creating SAP C/4HANA. The 2017 purchase of Gigya for $350 million became the data management platform for SAP, helping customers maintain and protect customer data. The SAP acquisition of CallidusCloud earlier this year for $2.4 billion gave the company a modern, cloud-based sales, quote-to-cash and customer experience product that rounds out the front-office offerings complementing SAP's existing ERP products.

“The Gigya acquisition is really essential for that vision of [customer identification]. And managing that identity in a secure environment — especially with GDPR — is critical,” Kingstone said. “That plus bringing in their data management capabilities and machine learning with SAP Leonardo — if they can pull this off, that’s the next generation in a modern architecture.”

Pricing information regarding SAP C/4HANA wasn’t released at the unveiling.

AWS Cloud9 IDE threatens Microsoft developer base

As cloud platform providers battle for supremacy, they’ve trained their sights on developers to expand adoption of their services.

A top priority now for leading cloud platforms is to make themselves as developer-friendly as possible, as both Microsoft and Amazon Web Services have done. At its re:Invent 2017 conference last month, AWS launched the AWS Cloud9 IDE, a cloud-based integrated development environment that can be accessed through any web browser. The service fills in a key piece that AWS was missing as it competes with other cloud providers: an integrated environment to write, run and debug code.

“AWS finally has provided a ‘living room’ for developers with its Cloud9 IDE,” said Holger Mueller, an analyst at Constellation Research in San Francisco. The move takes aim at Microsoft in particular, which continues to extend its longtime strengths in developer tools and community relationships into the cloud era.

Indeed, for developers who have grown up in the Microsoft Visual Studio IDE ecosystem, Microsoft Azure is a logical choice, as the two have been optimized for one another. However, not all developers use Visual Studio, so cloud providers must deliver an open set of services to attract them. Now, having integrated the Cloud9 technology it acquired last year as the Cloud9 IDE, AWS has an optimized developer platform of its own.

AWS Cloud9 IDE adoption 

“There is no doubt we will use it,” said Chris Wegmann, managing director of the Accenture AWS Business Group at Accenture. “We’ve used lots of native tooling. There have been gaps in the app dev tooling for a while, but some third parties, like Cloud9, have filled those gaps in the past. Now it is part of the mothership.”


With the Cloud9 IDE, AWS offers developers an IDE experience focused on their cloud versus having them use their top competitor’s IDE with an AWS-focused toolkit, said Rhett Dillingham, an analyst at Moor Insights & Strategy in Austin, Texas.

“[They] are now providing an IDE with strong AWS service integration, for example, for building serverless apps with Lambda, as they build out the feature set with real-time pair programming and direct terminal access for AWS CLI [command-line interface] use,” he said.

That integration is key to lure developers away from their familiar development environments.

“When I saw the news about the Cloud9 IDE I said that’s great, there’s another competitor in this market,” said Justin Rupp, systems and cloud architect at GlobalGiving, a crowdfunding organization in Washington, D.C. Rupp uses Microsoft’s popular Visual Studio Code tool, also known as VS Code, a lightweight code editor for Windows, Linux and macOS.

The challenge for AWS is to attract developers that already like the tool they’re using, and that’ll be a tall order, said Michael Facemire, an analyst at Forrester Research in Cambridge, Mass. “I’m a developer myself and I’m not giving up VS Code,” he said.

That’s been the knock against AWS, that they provide lots of cool functionality, but no tooling. This starts to address that big knock.
Michael Facemire, analyst, Forrester Research

For now, Cloud9 IDE is a “beachhead” for AWS to present something for developers today, and build it up over time, Facemire said. For example, to tweak a Lambda function, a developer could just pull up the cloud editor that Amazon provides right there live, he said.

“That’s been the knock against AWS, that they provide lots of cool functionality, but no tooling,” Facemire said. “This starts to address that big knock.”

Who is more developer-friendly?

AWS' reputation is that it's not the most developer-friendly cloud platform from a tooling perspective, though hardcore, professional developers don't require such tooling. But as AWS has grown and expanded, it has become friendlier to the rest of the developer community because of its sheer volume and consumability. And the AWS Cloud9 IDE appeals to developers who fit in between the low-code set and the hardcore pros, said Mark Nunnikhoven, vice president of cloud research at Dallas-based Trend Micro.

“The Cloud9 tool set is firmly in the middle, where you’ve got some great visualization, you’ve got some great collaboration features, and it’s really going to open it up for more people to be able to build on the AWS cloud platform,” he said.

Despite providing a new IDE to its developer base, AWS must do more to win their complete loyalty.

The IDE also has gaps. AWS Cloud9 IDE supports JavaScript, Python, PHP and more, but it lacks first-class Java support, which is surprising given how many developers use Java. In addition, Amazon chose not to use the open source Language Server Protocol (LSP), said Mike Milinkovich, executive director of the Eclipse Foundation, which has provided the Eclipse Che web-based development environment since 2014. Eclipse Che supports Java and has provided containerized developer workspaces for almost two years.

AWS will eventually implement Java support, but it will have to build it from scratch, he said. Had the company participated in the LSP ecosystem, it could have had Java support today based on the Eclipse LSP4J project, the same codebase Microsoft uses to provide Java support for VS Code, he said.
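Milinkovich's point about LSP is concrete: the protocol is just JSON-RPC messages framed with a `Content-Length` header, so any editor that speaks it can reuse an existing language server rather than reimplementing language support. A minimal sketch of the framing (illustrative only, not AWS or Eclipse code):

```python
import json

def lsp_frame(payload: dict) -> bytes:
    # LSP transports each JSON-RPC message as a Content-Length header,
    # a blank line, then the UTF-8 encoded JSON body.
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# The first message any LSP client sends to a language server:
init = lsp_frame({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
})
```

Because the wire format is this simple and standardized, a single Java server (such as the one built on LSP4J) can serve VS Code, Eclipse Che and any other LSP-aware editor.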

This proprietary approach to developer tools is out of touch with industry best practices, Milinkovich said. “Cloud9 may provide a productivity boost for AWS developers, but it will not be the open source solution that the industry is looking for,” he said.

Constellation Research’s Mueller agreed, and noted that in some ways AWS is trying to out-Microsoft Microsoft.

“It’s very early days for AWS Cloud9 IDE, and AWS has to work on the value proposition,” he said. “But, like you have to use Visual Studio for Azure to be fully productive, the same story will repeat for Cloud9 in a few years.”

Azure migration takes hostile approach to lure VMware apps

The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.

A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.

Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.

This approach eschews VMware virtualization and adapts these applications to a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It's the other part of the Azure migration service that has drawn the ire of VMware.

VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.

In response, VMware criticized Microsoft's characterization of the Azure migration service as part of a transition to public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won't recommend or support the product.

This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.

Part of the motivation here could be Microsoft posturing either to negotiate a future deal with VMware or to ensure it doesn’t lose out on these types of migration, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.


VMware on AWS, which became generally available in late August, is operated by VMware, which also ports its software-defined data center stack to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace through its Cloud Foundation program. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered the second and third most popular public clouds behind AWS.

Companies have a mix of applications: some are well-suited to transition to the cloud, while others must stay inside a private data center or can't be re-architected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware's recent partnerships have made companies feel more comfortable with the public cloud and with curbing the management of their own data centers.

“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It's purely the promise that they're not locked into running VMware in their own data center that has caused them to double down on VMware.”

The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS.
Jeff Kato, analyst, Taneja Group

VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.

“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.

Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.

“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.

Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is only a public cloud provider, so it doesn’t have the same level of competitive friction as there has been historically between Microsoft and VMware, Kato said.

“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counter attack to VMware on AWS,” he said.

Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.

“If the customer is big enough, they’ll force them to work together,” Kato said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Sierra-Cedar unveils new survey of HR technology trends

LAS VEGAS — With their sights fixed firmly on the cloud, HR managers and IT purchasers are placing greater emphasis on data security, a personalized user experience, and improved learning and development. Those are among the top HR technology trends highlighted in the just-released Sierra-Cedar 2017-2018 HR Systems Survey, which for the first time also attempted to measure the impact of socially responsible policies on corporate bottom lines.

Sierra-Cedar Inc. has made a tradition of unveiling the survey results at the HR Technology Conference & Exposition, and it did so again here at the 2017 annual gathering. The survey celebrated its 20th anniversary this year.

“We’re having the same type of change today that we had 20 years ago,” said Stacey Harris, vice president of research and analytics for the consulting firm, which is based in Alpharetta, Ga. “The market is flipping on its head.”

Harris showed a chart of the top HR technology trends and purchase priorities in the survey’s history, noting that the HR automation and employee self-service of the early days are matched by today’s cloud, mobile and social HR in their ability to transform organizations.

She said the two-decade evolution in HR technology trends reflects HR’s own shift from a focus on its own processes to being more a driver of outcomes for the entire business. “It isn’t just about doing HR better,” she said. “It’s about doing HR for a purpose.”

But the practical aspects of delivering HR and talent management improvements remain at the forefront. “Integration strategies and risk and security strategies are roiling to the top,” Harris told conference attendees. “If you don’t have something going on in [those areas], you’re one of the few organizations that we talked to.”

Sierra-Cedar conducted the survey of 1,312 organizations last spring. Just over half of them were small organizations (2,500 or fewer employees), with medium-sized organizations (2,500 to 10,000 employees) representing around a quarter of respondents and large organizations (10,000-plus) the remaining quarter.

Social responsibility correlates with HR technology trends

In previous surveys, the consulting firm compared the primary criteria an organization uses to make decisions — categorizing it as either top-performing in the financial sense, talent-driven or data-driven — and then measured the effect of that management style on positive business outcomes, such as return on equity.

For this year’s survey, social responsibility was added for the first time as a decision-making style. Its impact on business outcomes proved significant.

Sierra-Cedar found organizations that emphasized diversity, wellness, flexible schedules, family leave and employee engagement performed 14% better than a control group. Talent-driven organizations — those with mature career- and succession-planning processes and which were rated strong in such talent metrics as employee retention and engagement — also saw a 14% advantage. Organizations that emphasized financial or data-driven decision-making performed 8% and 3% better, respectively.

“Social responsibility has become such a big issue, both in our headlines and in our talking points and in our businesses and in our brands,” Harris said. “You can’t ignore it. Some organizations are taking it to a whole new level,” and technology is helping them to address it.

Other survey data showed the most socially responsible organizations were much more likely to have highly rated HR processes for performance and compensation management and onboarding. 

HR technology trends reflected in purchase intentions

A significant portion of the survey deals with respondents' purchase intentions. While all seven major HR technology categories that Sierra-Cedar tracks showed slight growth, the percentage growth over last year was greatest in talent management and in business intelligence and analytics.

One category getting significant attention from purchasers is learning and development tools. The survey report noted that learning management systems (LMSes) are the oldest HR systems — second only to core HR and payroll — and have been installed for five years on average.

LMSes are being considered for change at a higher rate than other applications today.
Stacey Harris, vice president of research and analytics, Sierra-Cedar

“LMSes are being considered for change at a higher rate than other applications today, with 14% of organizations planning for replacement in the next 24 months, and 24% evaluating other solutions,” the report said. A better user experience, new functionality and improved integration were by far the top hopes for new LMSes.

The report also found a strong need to integrate the many HR systems and applications that most organizations have. The average organization has 18 integration touch points, though the number varies widely by size, with large organizations averaging 62 touch points and small ones only five. Of survey respondents, 20% had a major initiative to improve system integration, and 10% are working on one, while 17% already have a regularly updated enterprise integration strategy. But that still leaves around half with no real integration strategy, which Sierra-Cedar suggested is a missed opportunity, because organizations that have a strategy earn 21% higher ratings for their business outcomes.

Enterprise integration strategies also proved to be positive contributors in organizations with effective processes to protect HR data privacy and security. In addition, 70% of the organizations in the top tier for business outcomes have a regularly updated risk and security strategy, according to Sierra-Cedar.

The survey also tracked HR technology trends when it comes to the IT architectures that organizations planned to use to transform their HR systems. The results showed 22% planned the rip-and-replace approach, moving everything at once to the cloud, while 25% planned a hybrid setup, with talent management and workforce management typically in the cloud and the rest remaining on premises. Another 22% planned to run similar applications in parallel in both deployment models, while 19% chose to outsource HR to service providers.

The report provided further evidence of the inexorable march to SaaS HR applications and the increased importance of human capital management that is personalized for each employee. “If you haven’t talked about personalization yet, make sure you put that in your notes,” Harris said. “Personalization is going to be the next big thing.”

The 122-page report is free for download after registration and a short questionnaire.

AWS hybrid cloud push delves into load balancing

AWS has set its sights on networking in its latest bid to address hybrid workloads.

A year ago, Amazon added Application Load Balancer to move routing capabilities up the stack to the application layer, a move to appeal to modern workloads rather than underlying infrastructure. Now, Amazon has extended those capabilities beyond its own data centers for AWS hybrid cloud applications, disaster recovery or migrations.

Previously, Application Load Balancer could only route traffic to Elastic Compute Cloud instances, often for microservices that run in containers on AWS. Now, customers can use it to route traffic directly to their private data centers via an IP address. It also can spread traffic to web servers or databases on multiple Virtual Private Clouds within a region. A single instance can host more than one service, with containers with multiple interfaces or security groups, or services with a common port number and distinct IP addresses.

IT can distribute traffic on premises or in AWS with a single load balancer or with separate load balancers for each destination. Amazon CloudWatch can automatically track metrics, and users can perform health checks on individual load balancers.
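In practice, the routing described above comes down to creating a target group with target type `ip` and registering on-premises addresses with it; targets outside the VPC, reached over VPN or Direct Connect, are registered with `AvailabilityZone` set to `all`. A minimal boto3-style sketch (the ARN and IP address are hypothetical placeholders):

```python
def ip_target_params(target_group_arn: str, ip: str, port: int) -> dict:
    """Build register_targets parameters for an on-premises IP target.

    IP targets outside the load balancer's VPC (reachable via VPN or
    Direct Connect) must be registered with AvailabilityZone='all'.
    """
    return {
        "TargetGroupArn": target_group_arn,
        "Targets": [{"Id": ip, "Port": port, "AvailabilityZone": "all"}],
    }

params = ip_target_params(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/onprem-web/0123456789abcdef",  # hypothetical ARN
    "10.0.0.10",                                 # on-premises private IP
    80,
)

# In a real session, these parameters would be passed to the API:
#   import boto3
#   boto3.client("elbv2").register_targets(**params)
```

The same target group can mix EC2-hosted and on-premises IP targets, which is what allows a single Application Load Balancer to spread traffic across both environments.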

This new capability is designed for AWS hybrid cloud applications, and it continues the cloud provider’s strategic shift to address users that need or want to maintain a presence on premises. Days ago, VMware Cloud on AWS became available, which represents the public cloud giant’s largest foray yet into supporting workloads beyond its own data centers.

Amazon needed to address the fact that enterprise environments are a mix of public cloud and private data centers, and this tool could be particularly beneficial to users who are in the midst of a transition, said Dan Conde, an analyst at Enterprise Strategy Group in Milford, Mass.

“Most customers will not make a sudden transition where all workloads magically appear on AWS,” he said. “It takes time to make the move.”

The Application Load Balancer service is better suited for containerized workloads than AWS’ Classic Load Balancer, for example, to enable multiple containers on an instance, said Adam Book, principal architect at Relus Technologies, an IT services provider and AWS consultancy in Peachtree Corners, Ga. Moreover, containers generate random ports, and Application Load Balancer will force traffic to the appropriate target group.

“We’re big on containers and making the move to container-based microservices, and the Application Load Balancer is a big key in doing that,” he said.

Still, the move out of the data center is a multistep process, so routing traffic directly to an IP address along a secure connection could be a viable way for enterprises to architect AWS hybrid cloud applications, Book said.

AWS pushes into yet another IT sector

Extending Application Load Balancer capabilities beyond Amazon’s data centers pulls AWS into the application delivery controller (ADC) market, said Brad Casemore, an analyst with IDC. But to AWS, it’s just another area to reduce friction for its users, not a competitive goal to be an ADC vendor for all workloads in the enterprise, he said.

Nevertheless, while this upgrade doesn't address multicloud architectures as some of the vendors in this space do, it certainly could eat into a market where providers such as F5 Networks, Citrix NetScaler and A10 Networks likely didn't expect or want AWS to deliver on-premises capabilities, Casemore said.

This also is part of AWS’ as-a-service continuum that’s focused squarely on developers in the enterprise who embrace AWS’ hybrid cloud, he added.

“Obviously, this isn’t speaking to legacy IT folks who have stood up ADCs for their client-server apps from time immemorial,” Casemore said. “[AWS] is looking at this new wave of apps that are coming now, and they want to make sure there’s a smooth conduit between what happens in the enterprise and in the cloud.”

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Flash array vendor Nimbus Data drives into OEM channel

Nimbus Data, an all-flash array pioneer that failed to make it big, now has its sights set on supplying custom flash drives to OEMs.

Nimbus, maker of the ExaFlash all-flash array, has waded into the OEM supply chain with its ExaDrive solid-state drive reference design aimed at drive and server vendors. ExaDrive forgoes traditional SSD architectures that are based on a single monolithic chip. Instead, the multiprocessor ExaDrive SSD packages NAND flash and software intelligence in a 3.5-inch form factor.

ExaDrive uses a distributed parallel architecture to incorporate data features and functionality in a grid of application-specific integrated circuits (ASICs). The drives offload error correction and flash management to a series of embedded microcontrollers. An intelligent processor delegates capacity management and wear leveling across the multiple ASICs.

The processor presents all the flash capacity to a host as a dual-ported enterprise SAS SSD that can fit into disk-based storage environments. Nimbus Data buys its NAND flash from chipmaker SK Hynix under a supplier agreement the companies signed last year.

ExaDrive SSDs are the same solid-state drives used in several of Nimbus Data’s ExaFlash all-flash arrays that launched in August 2016. ExaDrive SSDs are available in 25 TB and 50 TB capacities, making them a better fit for archiving than high-performing primary storage, where flash usually shows up.

Nimbus Data's goal is still to replace hard disk drives with flash, although the approach is different.

“The adoption of flash in the data center is very much in its infancy,” said Tom Isakovich, CEO of Nimbus, based in Irvine, Calif. “There are 40 million units of 3.5-inch hard disk drives still shipping per year. It’s an enormous market. Our goal is to bring flash into that market and begin eating away [at disk sales.]”

Viking Technology, a division of publicly held Sanmina Corp., started shipments of its UHC-Silo SSDs based on ExaDrive technology in July. Smart Modular Technologies this month became the second OEM to introduce branded ExaDrive SSDs. Smart Modular’s Osmium family incorporates multi-level cell NAND.

“The Nimbus product is really a plug-and-play replacement for existing hard drives. We have been asked by our customers to supply larger capacity drives, and the partnership with Nimbus will help us build out the product line,” said Victor Tsai, director of marketing for Smart Modular’s flash products.

ExaDrive is Nimbus Data’s OEM high-performance SSD aimed at archiving and cloud service providers.

Nimbus Data: ExaDrive ‘reinvents’ SSD design

The same ExaDrive SSDs power Nimbus Data's ExaFlash C-series and D-series all-flash arrays. The C-series is geared toward cloud service providers, while the dense D line packs up to 4.5 PB of raw flash in a single rack.

ExaDrive SSD also will be marketed to cloud service providers and enterprises that run high-capacity flash in standard servers, although Isakovich said Nimbus does not plan to market its own branded SSDs directly to end users.

Nimbus Data claims a 50 TB ExaDrive consumes 0.14 watts of power per terabyte. A standard rack packed with ExaDrive SSDs provides 52 PB of raw capacity. The drives have an implied life span of 10 years and promise data retention up to six months. The capacity and scale, if achieved, would represent a dramatic improvement over SSDs currently available from legacy drive manufacturers.
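Taken at face value, those specs imply very modest per-drive power and a sizable but plausible rack-level draw. A quick back-of-the-envelope check (my arithmetic on the quoted 0.14 W/TB and capacity claims, not vendor-published totals):

```python
watts_per_tb = 0.14        # Nimbus Data's claimed power draw per terabyte
drive_capacity_tb = 50     # largest ExaDrive capacity

# Per-drive draw: 0.14 W/TB * 50 TB = 7 W, far below a spinning 3.5-inch disk
drive_watts = watts_per_tb * drive_capacity_tb

# A full rack at the claimed 52 PB of raw capacity, same ratio
rack_capacity_tb = 52 * 1000
rack_watts = watts_per_tb * rack_capacity_tb  # roughly 7.3 kW for the rack
```

At roughly 7 W per 50 TB drive, the density-per-watt argument, rather than raw performance, is what positions ExaDrive against high-capacity hard disks.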

Dennis Martin, president of computer analyst firm Demartek LLC in Golden, Colo., whose lab ran performance tests on ExaDrive SSDs, said hyperscale data centers have started placing more data on flash “because it’s the only thing that can keep up.”

“I wrote an article [in 2013] called ‘Horses, Buggies and SSDs,’ in which I [predicted] we would see flash getting used as an archive device. That’s sort of what ExaDrive is going for,” Martin said. “The performance isn’t superfast, but it’s competitive. The big thing is the capacity. This is a standard 12 gigabit per second SAS drive, but it’s a 3.5-inch SSD with 50 TB.”

Flash storage offers performance and latency improvements over spinning disk. All-flash arrays concentrate management intelligence either in an operating system or inside array controllers. But even flash systems struggle to keep pace with soaring data growth and the accompanying demand for increased computational power to deliver inline data services.

Nimbus has run lean and eschewed outside investments, other than a $2 million angel round at its inception in 2007. The ExaDrive project is part of a corporate reboot. Although an early entrant in the all-flash array market — it launched its Gemini product line in 2010 — Nimbus Data quickly got overshadowed by venture-funded competitors with larger marketing and sales budgets. The company slipped out of view in 2014, leading some in the industry to assume it had ceased operations. The vendor returned a year ago with the ExaFlash, and it went quiet again until last week’s ExaDrive revelation.

Isakovich said the original goal of the ExaDrive project was to develop an exabyte-scale array and bring it to market by 2020, but that plan was foiled by the limitations of existing SSD designs. Instead, he developed ExaDrive in tandem with the ExaFlash nodes unveiled in 2016.

“There was no way to build it using off-the-shelf SSDs. We set out to reinvent everything about the way SSDs are built,” Isakovich said.