Tag Archives: analytics

Sigma analytics platform’s interface simplifies queries

In desperate need of data dexterity, Volta Charging turned to the Sigma analytics platform to improve its business intelligence capabilities and ultimately help fuel its growth.

Volta, based in San Francisco and founded in 2010, is a provider of electric vehicle charging stations, and three years ago, when Mia Oppelstrup started at Volta, the company faced a significant problem.

Because there aren’t dedicated charging stations the same way there are dedicated gas stations, Volta has to negotiate with organizations — mostly retail businesses — for parking spots where Volta can place its charging stations.

Naturally, Volta wants its charging stations placed in the parking spots with the best locations near the business they serve. But before an organization gives Volta those spots, Volta has to show that it makes economic sense, that by putting electric car charging stations closest to the door it will help boost customer traffic through the door.

That takes data. It takes proof.

Volta, however, was struggling with its data. It had the necessary information, but finding the data and then putting it in a digestible form was painstakingly slow. Queries had to be submitted to engineers, and those engineers then had to write code to transform the data before delivering a report.

Any slight change required an entirely new query, which involved more coding, time and labor for the engineers.

But then the Sigma analytics platform transformed Volta’s BI capabilities, Volta executives said.

Curiosity isn’t enough to justify engineering time, but curiosity is a way to get new insights. By working with Sigma and doing queries on my own I’m able to find new metrics.
Mia Oppelstrup, business intelligence manager, Volta Charging

“If I had to ask an engineer every time I had a question, I couldn’t justify all the time it would take unless I knew I’d be getting an available answer,” said Oppelstrup, who began in marketing at Volta and now is the company’s business intelligence manager. “Curiosity isn’t enough to justify engineering time, but curiosity is a way to get new insights. By working with Sigma and doing queries on my own I’m able to find new metrics.”

Metrics, Oppelstrup added, that she would otherwise never have been able to find.

“It’s huge for someone like me who never wrote code,” Oppelstrup said. “It would otherwise be like searching a warehouse with a forklift while blindfolded. You get stuck when you have to wait for an engineer.”

Volta looked at other BI platforms — Tableau and Microsoft’s Power BI, in particular — but just under two years ago chose Sigma and has forged ahead with the platform from the 2014 startup.

The product

Sigma Computing was founded by the trio of Jason Frantz, Mike Speiser and Rob Woollen.

Based in San Francisco, the vendor has gone through three rounds of financing and to date raised $58 million, most recently attracting $30 million in November 2019.

When Sigma was founded, and ideas for the Sigma analytics platform first developed, it was in response to what the founders viewed as a lack of access to data.

“Gartner reported that 60 to 73 percent of data is going unused and that only 30 percent of employees use BI tools,” Woollen, Sigma’s CEO, said. “I came back to that — BI was stuck with a small number of users and data was just sitting there, so my mission was to solve that problem and correct all this.”

Woollen, who previously worked at Salesforce and Sutter Hill Ventures — a main investor in Sigma — and his co-founders set out to make data more accessible. They aimed to design a BI platform that could be used by ordinary business users — citizen data scientists — without relying so heavily on engineers, and one that responds quickly no matter what queries users ask of it.

Sigma launched the Sigma analytics platform in November 2018.

Like other BI platforms, Sigma — entirely based in the cloud — connects to a user’s cloud data warehouse in order to access the user’s data. Unlike most BI platforms, however, the Sigma analytics platform is a low-code BI tool that doesn’t require engineering expertise to sift through the data, pull the data relevant to a given query and present it in a digestible form.

A key element of that is the Sigma analytics platform’s user interface, which resembles a spreadsheet.

Users simply make entries and notations in the spreadsheet, and Sigma automatically writes the necessary SQL in the background and runs the query.
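
As a rough illustration of that idea (not Sigma's actual implementation), the following Python sketch shows how a spreadsheet-style specification, consisting of a table, a few filters and a grouping, could be translated into the SQL a platform like Sigma generates behind the scenes. The table and column names are hypothetical.

```python
# Illustrative only: a toy translation of a spreadsheet-style query
# specification into SQL. This is not Sigma's implementation; the table
# and column names are invented for the example.

def build_sql(table, filters, group_by, measure):
    """Turn a simple spreadsheet-like spec into a SQL string."""
    where = " AND ".join(f"{col} = '{val}'" for col, val in filters.items())
    return (
        f"SELECT {group_by}, SUM({measure}) AS total "
        f"FROM {table} "
        f"WHERE {where} "
        f"GROUP BY {group_by};"
    )

# Changing one "cell" of the spec (say, the week) regenerates the query
# with no hand-written SQL.
print(build_sql("charging_sessions",
                {"city": "San Francisco", "week": "2020-W02"},
                group_by="station_id",
                measure="sessions"))
```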

“The focus is always on expanding the audience, and 30 percent employee usage is the one that frustrates me,” Woollen said. “We’re focused on solving that problem and making BI more accessible to more people.”

The interface is key to that end.

“Products in the past focused on a simple interface,” Woollen said. “Our philosophy is that just because a businessperson isn’t technical that shouldn’t mean they can’t ask complicated questions.”

With the Sigma analytics platform’s spreadsheet interface, users can query their data to examine, for example, sales performance in a certain location, time or week. They can then tweak the query to look at a different time or a different week, view the results on a monthly basis, compare them year over year, and add and subtract fields and columns at will.

And rather than file a ticket to the IT department for each separate query, they can run the query themselves.

“The spreadsheet interface combines the power to ask any question of the data without having to write SQL or ask a programmer to do it,” Woollen said.

Giving end users power to explore data

Volta knew it had a data dexterity problem — an inability to truly explore its data given its reliance on engineers to run time- and labor-consuming queries — even before Oppelstrup arrived. The company was looking at different BI platforms to attempt to help, but most of the platforms Volta tried out still demanded engineering expertise, Oppelstrup said.

The outlier was the Sigma analytics platform.

“Within a day I was able to set up my own complex joins and answer questions by myself in a visual way,” Oppelstrup said. “I always felt intimidated by data, but Sigma felt like using a spreadsheet and Google Drive.”

One of the significant issues Volta faced before it adopted the Sigma analytics platform was the inability of its salespeople to show data when meeting with retail outlets and attempting to secure prime parking spaces for Volta’s charging stations.

Because of the difficulty accessing data, the salespeople didn’t have the numbers to prove that placing charging stations near the door would increase customer traffic.

With the platform’s querying capability, however, Oppelstrup and her team were able to make the discoveries that armed Volta’s salespeople with hard data rather than simply anecdotes.

They could now show a bank a surge in the use of charging stations near banks between 9 a.m. and 4 p.m., movie theaters a similar surge in the use just before the matinee and again before the evening feature, and grocery stores a surge near stores at lunchtime and after work.

They could also show that the charging stations were being used by actual customers, and not by random people charging up their vehicles and then leaving without also going into the bank, the movie theater or the grocery store.

“It’s changed how our sales team approaches its job — it used to just be about relationships, but now there’s data at every step,” Oppelstrup said.

Sigma enables Oppelstrup to give certain teams access to certain data, everyone access to other data, and importantly, easily redact data fields within a set that might otherwise prevent her from sharing information entirely, she said.

And that gets to the heart of Woollen’s intent when he helped start Sigma — enabling business users to work with more data and giving more people that ability to use BI tools.

“Access leads to collaboration,” he said.

Storytelling using data makes information easy to digest

Storytelling using data is helping make analytics digestible across entire organizations.

While the amount of available data has exploded in recent years, the ability to understand the meaning of the data hasn’t kept pace. There aren’t enough trained data scientists to meet demand, often leaving data interpretation in the hands of line-of-business employees and high-level executives who are mostly guessing at the underlying meaning behind data points.

Storytelling using data, however, changes that.

A group of business intelligence software vendors are now specializing in data storytelling, producing platforms that go one step further than traditional BI platforms and attempt to give the data context by putting it in the form of a narrative.

One such vendor is Narrative Science, based in Chicago and founded in 2010. On Jan. 6, Narrative Science released a book entitled Let Your People Be People that delves into the importance of storytelling for businesses, with a particular focus on storytelling using data.

Recently, authors Nate Nichols, vice president of product architecture at Narrative Science, and Anna Schena Walsh, director of growth marketing, answered a series of questions about storytelling using data.

Here in Part II of a two-part Q&A they talk about why storytelling using data is a more effective way to interpret data than traditional BI, and how data storytelling can change the culture of an organization. In Part I, they discussed what data storytelling is and how data can be turned into a narrative that has meaning for an organization.

What does an emphasis on storytelling in the workplace look like, beyond a means of explaining the reasoning behind data points?

Nate Nichols: As an example of that, I’ve been more intentional since the New Year about applying storytelling to meetings I’ve led, and it’s been really helpful. It’s not like people are gathering around my knee as I launch into a 30-minute story, but just remembering to kick off a meeting with a 3-minute recap of why we’re here, where we’re coming from, what we worked on last week and what the things are that we need going forward. It’s really just putting more time into reminding people of why, the cause and effect, just helping people settle into the right mindset. Storytelling is an empirically effective way of doing it.

We didn’t start this company to be storytellers — we really wanted everyone to understand and be able to act on data. It turned out that the best way to do that was through storytelling. The world is waking up to this. It’s something we used to do — our ancestors sat around the campfire swapping stories about the hunt, or where the best potatoes are to forage for. That’s a thing we used to do, it’s a thing that kids do all the time — they’re bringing other kids into their world — and what’s happening is that a lot of that has been beaten out of us as adults. Because of the way the workforce is going, the way automation is going, we’re heading back to the importance of those soft skills, those storytelling skills.

How is storytelling using data more effective at presenting data than typical dashboards and reports?

Anna Schena Walsh: The brain is hard-wired for stories. It’s hard-wired to take in information in that storytelling arc, which is what is [attracting our attention] — what is something we thought we knew, what is something new that surprised us, and what can we do about it? If you can put that in a way that is interesting to people in a way they can understand, that is a way people will remember. That is what really motivates people, and that’s what actually causes people to take action. I think visuals are important parts of some stories, whether it be a chart or a picture, it can help drive stories home, but no matter what you’re doing to give people information, the end is usually the story. It’s verbal, it’s literate, it’s explaining something in some way. In reality, we do this a lot, but we need to be a lot more systematic about focusing on the story part.

What happens when you present data without an explanation?

Nichols: If someone sends you a bar chart and asks you to use it to make decisions and there’s no story with it at all, what your brain does is it makes up a story around it. Historically, what we’ve said is that computers are good at doing charts — we never did charts and graphs and spreadsheets because we thought they were helpful for people, we did them because that was what computers could do. We’ve forgotten that. So when we do these charts, people look at them and make up their own stories, and they may be more or less accurate depending on their intuition about the business. What we’re doing now is we want everyone to be really on the same story, hearing the same story, so by not having a hundred different people come up with a hundred different internal stories in their head, what we’re doing at Narrative Science is to try and make the story external so everyone is telling the same story.

So is it accurate to say that accuracy is a part of storytelling using data?

Schena Walsh: When I think of charts and graphs, interpreting those is a skill — it is a learned skill that comes to some people more naturally than others. In the past few decades there’s been this idea that everybody needs to be able to interpret [data]. With storytelling, specifically data storytelling, it takes away the pressure of people interpreting the data for themselves. This allows people, where their skills may not be in that area … they don’t have to sit down and interpret dashboards. That’s not the best use of their talent, and data storytelling brings that information to them so they’re able to concentrate on what makes them great.

What’s the potential end result for organizations that employ data storytelling — what does it enable them to do that other organizations can’t?

With data storytelling there is a massive opportunity to have everybody in your company understand what’s happening and be able to make informed decisions much, much faster.
Anna Schena Walsh, director of growth marketing, Narrative Science

Schena Walsh: With data storytelling there is a massive opportunity to have everybody in your company understand what’s happening and be able to make informed decisions much, much faster. It’s not that information isn’t available — it certainly is — but it takes a certain set of skills to be able to find the meaning. So we look at it as empowering everybody because you’re giving them the information they need very quickly, and also giving them the ability to lean into what makes them great. The way we think about it is that if you can choose to have someone give a two-minute explanation of what’s going on in the business to everyone in the company every day as they go into work, would you do it? And the answer is yes, and with data storytelling that’s what you can do.

What we’ll see as companies keep trying to move toward everyone needing to interpret data, I think, is a lot of potential for burnout in people who aren’t naturally inclined to do it. I also think there’s a speed element — it’s not as fast to have everybody learn this skill and do it every day themselves as to have the information serviced to them in a way they can understand.

Editor’s note: This interview has been edited for clarity and conciseness.

Citrix’s performance analytics service gets granular

Citrix introduced an analytics service to help IT professionals better identify the cause of slow application performance within its Virtual Apps and Desktops platform.

The company announced the general availability of the service, called Citrix Analytics for Performance, at its Citrix Summit, an event for the company’s business partners, in Orlando on Monday. The service carries an additional cost.

Steve Wilson, the company’s vice president of product for workspace ecosystem and analytics, said many IT admins must deal with performance problems as part of the nature of distributed applications. When they receive a call from workers complaining about performance, he said, it’s hard to determine the root cause — be it a capacity issue, a network problem or an issue with the employee’s device.

Performance, he said, is a frequent pain point for employees, especially remote and international workers.

“There are huge challenges that, from a performance perspective, are really hard to understand,” he said, adding that the tools available to IT professionals have not been ideal in identifying issues. “It’s all been very technical, very down in the weeds … it’s been hard to understand what [users] are seeing and how to make that actionable.”

Part of the problem, according to Wilson, is that traditional performance-measuring tools focus on server infrastructure. Keeping track of such metrics is important, he said, but they do not tell the whole story.

“Often, what [IT professionals] got was the aggregate view; it wasn’t personalized,” he said.

When the aggregate performance of the IT infrastructure is “good,” Wilson said, that could mean that half an organization’s users are seeing good performance, a quarter are seeing great performance, but a quarter are experiencing poor performance.

With its performance analytics service, Citrix is offering a more granular picture of performance by providing metrics on individual employees, beyond those of the company as a whole. That measurement, which Citrix calls a user experience or UX score, evaluates such factors as an employee’s machine performance, user logon time, network latency and network stability.
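
Citrix has not published the formula behind the UX score, but the factors listed above lend themselves to a weighted composite. The Python sketch below is a hypothetical example of that concept; the weights, thresholds and normalization are assumptions for illustration, not Citrix's method.

```python
# Hypothetical sketch of a composite "user experience" score. The weights,
# thresholds and normalization are assumptions for illustration only; they
# are not Citrix's actual formula.

def normalize(value, best, worst):
    """Map a metric onto 0..1, where 1 is best."""
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def ux_score(logon_seconds, latency_ms, machine_cpu_pct, session_drops):
    weights = {"logon": 0.3, "latency": 0.3, "machine": 0.2, "stability": 0.2}
    parts = {
        "logon": normalize(logon_seconds, best=10, worst=120),
        "latency": normalize(latency_ms, best=20, worst=400),
        "machine": normalize(machine_cpu_pct, best=20, worst=100),
        "stability": normalize(session_drops, best=0, worst=5),
    }
    return round(100 * sum(weights[k] * parts[k] for k in weights), 1)

# Example: a user with a 35-second logon, 90 ms latency, 55% CPU and no drops.
print(ux_score(logon_seconds=35, latency_ms=90, machine_cpu_pct=55, session_drops=0))
```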

“With this tool, as a system administrator, you can come in and see the entire population,” Wilson said. “It starts with the top-level experience score, but you can very quickly break that down [to personal performance].”

Wilson said IT admins who had tested the product said this information helped them address performance issues more expeditiously.

“The feedback we’ve gotten is that they’ve been able to very quickly get to root causes,” he said. “They’ve been able to drill down in a way that’s easy to understand.”

A proactive approach

Eric Klein, analyst at VDC Research Group Inc., said the service represents a more proactive approach to performance problems, as opposed to identifying issues through remote access of an employee’s computer.

“If something starts to degrade from a performance perspective — like an app not behaving or slowing down — you can identify problems before users become frustrated,” he said.

Klein said IT admins would likely welcome any tool that, like this one, could “give time back” to them.

“IT is always being asked to do more with less, though budgets have slowly been growing over the past few years,” he said. “[Administrators] are always looking for tools that will not only automate processes but save time.”

Enterprise Strategy Group senior analyst Mark Bowker said in a press release from Citrix announcing the news that companies must examine user experience to ensure they provide employees with secure and consistent access to needed applications.

IT is always being asked to do more with less.
Eric Klein, analyst, VDC Research Group

“Key to providing this seamless experience is having continuous visibility into network systems and applications to quickly spot and mitigate issues before they affect productivity,” he said in the release.

Wilson said the performance analytics service was the product of Citrix’s push to the cloud during the past few years. One of the early benefits of that process, he said, has been in the analytics field; the company has been able to apply machine learning to the data it has garnered and derive insights from it.

“We do see a broad opportunity around analytics,” he said. “That’s something you’ll see more and more of from us.”

AtScale’s Adaptive Analytics 2020.1 a big step for vendor

AtScale unveiled its Adaptive Analytics 2020.1 platform on Wednesday, with data virtualization for analytics at scale as a central tenet.

The release marks a significant step for AtScale, which specializes in data engineering by serving as a conduit between BI tools and stored data. Not only is it a rebranding of the vendor’s platform — its most recent update was called AtScale 2019.2 and was rolled out in July 2019 — but it also marks a leap in its capabilities.

Previously, as AtScale — based in San Mateo, Calif., and founded in 2013 — built up its capabilities, its focus was on how to get big data to work for analytics, said co-founder and vice president of technology David Mariani. And while AtScale did that, it left the data where it was stored and queried one source at a time.

With AtScale’s Adaptive Analytics 2020.1 — available in general release immediately — users can query multiple sources simultaneously and get their response almost instantaneously due to augmented intelligence and machine learning capabilities. In addition, based on their query, their data will be autonomously engineered.

“This is not just an everyday release for us,” Mariani said. “This one is different. With our arrival in the data virtualization space we’re going to disrupt and show its true potential.”

Dave Menninger, analyst at Ventana Research, said that Adaptive Analytics 2020.1 indeed marks a significant step for AtScale.

“This is a major upgrade to the AtScale architecture which introduces the autonomous data engineering capabilities,” he said. “[CTO] Matt Baird and team have completely re-engineered the product to incorporate data virtualization and machine learning to make it easier and faster to combine and analyze data at scale. In some ways you could say they’ve lived up to their name now.”

This is not just an everyday release for us. This one is different. With our arrival in the data virtualization space we’re going to disrupt and show its true potential.
David Mariani, co-founder and vice president of technology, AtScale

AtScale has also completely re-engineered its platform, abandoning its roots in Hadoop, to serve both customers who store their data in the cloud and those who keep their data on premises.

“It’s not really about where the AtScale technology runs,” Menninger said. “Rather, they make it easy to work with cloud-based data sources as well as on premises data sources. This is a big change from their Hadoop-based, on-premises roots.”

AtScale’s Adaptive Analytics 2020.1 includes three main features: Multi-Source Intelligent Data Model, Self-Optimizing Query Acceleration Structures and Virtual Cube Catalog.

Multi-Source Intelligent Data Model is a tool that enables users to create logical data models through an intuitive process. It simplifies data modeling by rapidly assembling the data needed for queries, and then maintains its acceleration structures even as workloads increase.

Self-Optimizing Query Acceleration Structures, meanwhile, allow users to add information to their queries without having to re-aggregate the data over and over.
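
AtScale has not detailed the internals of those structures here. As a generic sketch of the underlying idea, the example below computes a rollup once and reuses it for later queries instead of re-aggregating the raw rows each time; the data and naming are hypothetical, and this is not AtScale's implementation.

```python
# Illustrative sketch of an aggregate "acceleration structure": build a
# rollup once, then answer later queries from it instead of rescanning the
# raw rows. Not AtScale's implementation; the data is invented.

from collections import defaultdict

raw_rows = [
    {"region": "West", "month": "2020-01", "sales": 120.0},
    {"region": "West", "month": "2020-02", "sales": 90.0},
    {"region": "East", "month": "2020-01", "sales": 75.0},
]

aggregate_cache = {}

def rollup(group_keys):
    """Build (or reuse) a sales rollup grouped by the given keys."""
    cache_key = tuple(group_keys)
    if cache_key not in aggregate_cache:          # build once...
        agg = defaultdict(float)
        for row in raw_rows:
            agg[tuple(row[k] for k in group_keys)] += row["sales"]
        aggregate_cache[cache_key] = dict(agg)
    return aggregate_cache[cache_key]             # ...reuse afterwards

print(rollup(["region"]))   # first call scans the raw rows
print(rollup(["region"]))   # second call is served from the cached aggregate
```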

A sample AtScale dashboard shows an organization’s internet sales data.

And Virtual Cube Catalog is a means of speeding up discoverability with lineage and metadata search capabilities that integrate natively into existing data catalogs. This enables business users and data scientists to locate needed information for whatever their needs may be, according to AtScale.

“The self-optimizing query acceleration provides a key part of the autonomous capabilities,” Menninger said. “Performance tuning big data queries can be difficult and time-consuming. However, it’s the combination of the three capabilities which really makes AtScale stand out.”

Other vendors are attempting to offer similar capabilities, but AtScale’s Adaptive Analytics 2020.1 packages them in a unique way, he added.

“There are competitors offering data virtualization and competitors offering cube-based data models, but AtScale is unique in the way they combine these capabilities with the automated query acceleration,” Menninger said.

Beyond data virtualization at scale, speed and efficiency are other key tenets of AtScale’s update, Mariani said. “Data virtualization can now be used to improve complexity and cost,” he said.

Tableau analytics platform upgrades driven by user needs

LAS VEGAS — Tableau revealed a host of additions and upgrades to the Tableau analytics platform in the days both before and during Tableau Conference 2019.

Less than a week before its annual user conference, the vendor released Tableau 2019.4, a scheduled update of the Tableau analytics platform. And during the conference, Tableau unveiled not only new products and updates to existing ones, but also an enhanced partnership with Amazon Web Services to help users move to the cloud and a new partner network.

Many of the additions to the Tableau analytics platform have to do with data management, an area Tableau only recently began to explore. Among them are Tableau Catalog and Prep Conductor.

Others, meanwhile, are centered on augmented analytics, including Ask Data and Explain Data.

All of these enhancements to the Tableau analytics platform come in the wake of the news last June that Tableau was being acquired by Salesforce, a deal that closed on Aug. 1 but whose integration was held up until just last week by a regulatory review in the United Kingdom examining what effect the combination of the two companies would have on competition.

In a two-part Q&A, Andrew Beers, Tableau’s chief technology officer, discussed the new and enhanced products in the Tableau analytics platform as well as how Tableau and Salesforce will work together.

Part I focuses on data management and AI products in the Tableau analytics platform, while Part II centers on the relationship between Salesforce and Tableau.

Data management has been a theme of new products and upgrades to the Tableau analytics platform — what led Tableau in that direction?

Andrew Beers: We’ve been about self-service analysis for a long time. Early themes out of the Tableau product line were putting the right tools in the hands of the people that were in the business that had the data and had the questions, and didn’t need someone standing between them and getting the answers to those questions. As that started to become really successful, then you had what happens in every self-service culture — dealing with all of this content that’s out there, all of this data that’s out there. We helped by introducing a prep product. But then you had people that were generating dashboards, generating data sets, and then we said, ‘To stick to our belief in self-service we’ve got to do something in the data management space, so what would a user-facing prep solution look like, an operationalization solution look like, a catalog solution look like?’ And that’s what started our thinking about all these various capabilities.

Along those lines, what’s the roadmap for the next few years?

Beers: We always have things that are in the works. We are at the beginning of several efforts — Tableau Prep is a baby product that’s a year and a half old. Conductor is just a couple of releases old. You’re going to see a lot of upgrades to those products and along those themes — how do you make prep easier and more approachable, how do you give your business the insight into the data and how it is being used, and how do you manage it? That’s tooling we haven’t built out that far yet. Once you have all of this knowledge and you’ve given people insights, which is a key ingredient in governance along with how to manage it in a self-service way, you’ll start to see the Catalog product grow into ideas like that.

Are these products something customers asked for, or are they products Tableau decided to develop on its own?

Beers: It’s always a combination. From the beginning we’ve listened to what our customers are saying. Sometimes they’re saying, ‘I want something that looks like this,’ but often they’re telling us, ‘Here is the kind of problem we’re facing, and here are the challenges we’re facing in our organization,’ and when you start to hear similar stories enough you generalize that the customers really need something in this space. And this is really how all of our product invention happens. It’s by listening to the intent behind what the customer is saying and then inventing the products or the new capabilities that will take the customer in a direction we think they need to go.

Shifting from data management to augmented intelligence, that’s been a theme of another set of products. Where did the motivation come from to infuse more natural language processing and machine learning into the Tableau analytics platform?

Beers: It’s a similar story here, just listening to customers and hearing them wanting to take the insights that their more analyst-style users got from Tableau to a larger part of the organization, which always leads you down the path of trying to figure out how to add more intelligence into the product. That’s not new for Tableau. In the beginning we said, ‘We want to build this tool for everyone,’ but if I’m building it for everyone I can’t assume that you know SQL, that you know color design, that you know how to tell a good story, so we had to build all those in there and then let users depart from that. With these smart things, it’s how can I extend that to letting people get different kinds of value from their question. We have a researcher in the NLP space who was seeing these signals a while ago and she started prototyping some of these ideas about how to bring natural language questioning into an analytical workspace, and that really inspired us to look deeply at the space and led us to think about acquisitions.

What’s the roadmap for Tableau’s AI capabilities?

With the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.
Andrew Beers, chief technology officer, Tableau

Beers: You’re going to see these AI and machine learning-style capabilities really in every layer of the product stack we have. We showed two [at the conference] — Ask Data and Explain Data — that are very much targeted at the analyst, but you’ll see it integrated into the data prep products. We’ve got some smarts in there already. We’ve added Recommendations, which is how to take the wisdom of the crowd, of the people that are at your business, to help you find things that you wouldn’t normally find or help you do operations that you yourself haven’t done yet but that your community around have done. You’re going to see that all over the product in little ways to make it easier to use and to expand the kinds of people that can do those operations.

As a technology officer, how fun is this kind of stuff for you?

Beers: It’s really exciting. It’s all kinds of fun things that we can do. I’ve always loved the mission of the company, how people see and understand data, because we can do this for decades. There’s so much interesting work ahead of us. As someone who’s focused on the technology, the problems are just super interesting, and I think with the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.

Dell EMC upgrades VxRail appliances for AI, SAP HANA

Dell EMC today added predictive analytics and network management to its VxRail hyper-converged infrastructure family while expanding NVMe support for SAP HANA and AI workloads.

Dell EMC VxRail appliances combine Dell PowerEdge servers and Dell-owned VMware’s vSAN hyperconverged infrastructure (HCI) software. The launch of Dell’s flagship HCI platform includes two new all-NVMe appliance configurations, plus the VxRail Analytical Consulting Engine (ACE) and support for SmartFabric Services (SFS) across multi-rack configurations.

The new Dell EMC VxRail appliance models are the P580N and the E560N. The P580N is a four-socket system designed for SAP HANA in-memory database workloads. It is the first appliance in the VxRail P Series performance line to support NVMe. The 1U E560N is aimed at high-performance computing and compute-heavy workloads such as AI and machine learning, along with virtual desktop infrastructure.

The new 1U E Series systems support Nvidia T4 GPUs for extra processing power. The E Series also supports 8 TB solid-state drives, doubling the total capacity of previous models. The VxRail storage-heavy S570 nodes also now support the 8 TB SSDs.

ACE is generally available following a six-month early access program. ACE, developed on Dell’s Pivotal Cloud Foundry platform, performs monitoring and performance analytics across VxRail clusters. It provides alerts for possible system problems and capacity analysis, and can help orchestrate upgrades.

The addition of ACE to VxRail comes a week after Dell EMC rival Hewlett Packard Enterprise made its InfoSight predictive analytics available on its SimpliVity HCI platform.

Wikibon senior analyst Stuart Miniman said the analytics, SFS and new VxRail appliances make it easier to manage HCI while expanding its use cases.

“Hyperconverged infrastructure is supposed to be simple,” he said. “When you add in AI and automated operations, that will make it simpler. We’ve been talking about intelligence and automation of storage our whole careers, but there has been a Cambrian explosion in that over the last year. Now they’re building analytics and automation into this platform.”

Bringing network management into HCI

Part of that simplicity includes making it easier to manage networking in HCI. Expanded capabilities for SFS on VxRail include the ability for HCI admins to manage networking switches across VxRail clusters without requiring dedicated networking expertise. SFS now applies across multi-rack VxRail clusters, automating switch configuration for up to six racks in one site. SFS supports from six switches in a two-rack configuration to 14 switches in a six-rack deployment.

Support for Mellanox 100 Gigabit Ethernet PCIe cards helps accelerate streaming media and live broadcast functions.

“We believe that automation across the data center is key to fostering operational freedom,” Gil Shneorson, Dell EMC vice president and general manager for VxRail, wrote in a blog with details of today’s upgrades. “As customers expand VxRail clusters across multiple racks, their networking needs expand as well.”

Dell EMC VxRail vs. Nutanix: All about the hypervisor?

IDC lists Dell as the leader in the hyperconverged appliance market, which IDC said hit $1.8 billion in the second quarter of 2019. Dell had 29.2% of the market, well ahead of second-place Nutanix with 14.2%. Cisco was a distant third with 6.2%.

According to Miniman, the difference between Dell EMC and Nutanix often comes down to the hypervisor deployed by the user. VxRail closely supports market leader VMware, but VxRail appliances do not support other hypervisors. Nutanix supports VMware, Microsoft Hyper-V and the Nutanix AHV hypervisors. The Nutanix software stack competes with vSAN.

“Dell and Nutanix are close on feature parity,” Miniman said. “If you’re using VMware, then VxRail is the leading choice because it’s 100% VMware. VxRail is in lockstep with VMware, while Nutanix is obviously not in lockstep with VMware.”

SwiftStack 7 storage upgrade targets AI, machine learning use cases

SwiftStack turned its focus to artificial intelligence, machine learning and big data analytics with a major update to its object- and file-based storage and data management software.

The San Francisco software vendor’s roots lie in the storage, backup and archive of massive amounts of unstructured data on commodity servers running a commercially supported version of OpenStack Swift. But SwiftStack has steadily expanded its reach over the last eight years, and its 7.0 update takes aim at the new scale-out storage and data management architecture the company claims is necessary for AI, machine learning and analytics workloads.

SwiftStack said it worked with customers to design clusters that scale linearly to handle multiple petabytes of data and support throughput of more than 100 GB per second. That allows it to handle workloads such as autonomous vehicle applications that feed data into GPU-based servers.

Marc Staimer, president of Dragon Slayer Consulting, said throughput of 100 GB per second is “really fast” for any type of storage and “incredible” for an object-based system. He said the fastest NVMe system tests at 120 GB per second, but it can scale only to about a petabyte.

“It’s not big enough, and NVMe flash is extremely costly. That doesn’t fit the AI [or machine learning] market,” Staimer said.

This is the second object storage product launched this week with speed not normally associated with object storage. NetApp unveiled an all-flash StorageGrid array Tuesday at its Insight user conference.

Staimer said SwiftStack’s high-throughput “parallel object system” would put the company into competition with parallel file system vendors such as DataDirect Networks, IBM Spectrum Scale and Panasas, but at a much lower cost.

New ProxyFS Edge

SwiftStack plans to introduce ProxyFS Edge, a new containerized software component, next year to give remote applications a local file system mount for data, rather than requiring them to connect through a network file protocol such as NFS or SMB. SwiftStack spent about 18 months creating a new API and software stack to extend its ProxyFS to the edge.

Founder and chief product officer Joe Arnold said SwiftStack wanted to utilize the scale-out nature of its storage back end and enable a high number of concurrent connections to go in and out of the system to send data. ProxyFS Edge will allow each cluster node to be relatively stateless and cache data at the edge to minimize latency and improve performance.

SwiftStack 7 will also add 1space File Connector software in November to enable customers that build applications using the S3 or OpenStack Swift object API to access data in their existing file systems. The new File Connector is an extension to the 1space technology that SwiftStack introduced in 2018 to ease data access, migration and searches across public and private clouds. Customers will be able to apply 1space policies to file data to move and protect it.
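
Because 1space File Connector exposes file data through standard object APIs, an application already written against S3 should not need to change. The sketch below shows what that client side typically looks like using boto3; the endpoint URL, credentials, bucket and key are hypothetical placeholders, not SwiftStack-specific values.

```python
# Client-side sketch only: an S3-style application reading data that, behind
# the scenes, might live on a file system exposed through an object API.
# The endpoint URL, credentials, bucket and key are hypothetical.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List and read objects exactly as the app would against any S3 store.
for obj in s3.list_objects_v2(Bucket="media-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="media-archive", Key="renders/frame_0001.exr")["Body"].read()
print(len(body), "bytes read")
```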

Arnold said the 1space File Connector could be especially helpful for media companies and customers building software-as-a-service applications that are transitioning from NAS systems to object-based storage.

“Most sources of data produce files today and the ability to store files in object storage, with its greater scalability and cost value, makes the [product] more valuable,” said Randy Kerns, a senior strategist and analyst at Evaluator Group.

Kerns added that SwiftStack’s focus on the developing AI area is a good move. “They have been associated with OpenStack, and that is not perceived to be a positive and colors its use in larger enterprise markets,” he said.

AI architecture

A new SwiftStack AI architecture white paper offers guidance to customers building out systems that use popular AI, machine learning and deep learning frameworks, GPU servers, 100 Gigabit Ethernet networking, and SwiftStack storage software.

“They’ve had a fair amount of success partnering with Nvidia on a lot of the machine learning projects, and their software has always been pretty good at performance — almost like a best-kept secret — especially at scale, with parallel I/O,” said George Crump, president and founder of Storage Switzerland. “The ability to ratchet performance up another level and get the 100 GBs of bandwidth at scale fits perfectly into the machine learning model where you’ve got a lot of nodes and you’re trying to drive a lot of data to the GPUs.”

SwiftStack noted distinct differences between the architectural approaches that customers take with archive use cases versus newer AI or machine learning workloads. An archive customer might use 4U or 5U servers, each equipped with 60 to 90 drives, and 10 Gigabit Ethernet networking. By contrast, one machine learning client clustered a larger number of lower-horsepower 1U servers, each with fewer drives and a 100 Gigabit Ethernet network interface card, for high bandwidth, Arnold said.

An optional new SwiftStack Professional Remote Operations (PRO) paid service is now available to help customers monitor and manage SwiftStack production clusters. SwiftStack PRO combines software and professional services.

Data silos and culture lead to data transformation challenges

It’s not as easy as it should be for many users to make full use of data for data analytics and business intelligence use cases, due to a number of data transformation challenges.

Data challenges arise not only in the form of data transformation problems, but also with broader strategic concerns about how data is collected and used.

Culture and data strategy within organizations are key causal factors of data transformation challenges, said Gartner analyst Mike Rollings.

“Making data available in various forms and to the right people at the right time has always been a challenge,” Rollings said. “The bigger barrier to making data available is culture.”

The path to overcoming data challenges is to create a culture of data and fully embrace the idea of being a data-driven enterprise, according to Rollings.

Rollings has been busy recently talking about the challenges of data analytics, including taking part in a session at the Gartner IT Symposium Expo from Oct. 20-24 in Orlando, where he also detailed some of the findings from the Gartner CDO (Chief Data Officer) survey.

Among the key points in the study is that most organizations have not included data and analytics as part of documented corporate strategies.

Making data available in various forms and to the right people at the right time has always been a challenge.
Mike Rollings, analyst, Gartner

“The primary challenge is that data and data insights are not a central part of business strategy,” Rollings said.

Often, data and data analytics are actually just byproducts of other activities, rather than being the core focus of a formal data-driven architecture, he said. In Rollings’ view, data and analytics should be considered assets that can be measured, managed and monetized.

“When we talk about measuring and monetizing, we’re really saying, do you have an intentional process to even understand what you have,” he said. “And do you have an intentional process to start to evaluate the opportunities that may exist with data, or with analysis that could fundamentally change the business model, customer experience and the way decisions are made.”

Data transformation challenges

The struggle to make the data useful is a key challenge, said Hoshang Chenoy, senior manager of marketing analytics at San Francisco-based LiveRamp, an identity resolution software vendor.

Among other data transformation challenges is that many organizations still have siloed deployments, where data is collected and remains in isolated segments.

“In addition to having siloed data within an organization, I think the biggest challenge for enterprises to make their data ready for analytics are the attempts at pulling in data that has previously never been accessed, whether it’s because the data exists in too many different formats or for privacy and security reasons,” Chenoy said. “It can be a daunting task to start on a data management project but with the right tech, team and tools in place, enterprises should get started sooner rather than later.”

How to address the challenges

With the data warehouse and data lake technologies, the early promise was making it easier to use data.

But despite technology advances, there’s still a long way to go to solving data transformation challenges, said Ed Thompson, CTO of Matillion, a London-based data integration vendor that recently commissioned a survey on data integration problems.

The survey of 200 IT professionals found that 90% of organizations see making data available for insights as a barrier. The study also found a rapid rate of data growth of up to 100% a month at some organizations.
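
To put 100% monthly growth in perspective, a quick compounding calculation (with an assumed starting volume, not a figure from the survey) shows how quickly data at that rate escalates.

```python
# Illustrative compounding only: data doubling every month for a year.
# The 1 TB starting point is an assumption, not a number from the survey.
volume_tb = 1.0
for month in range(12):
    volume_tb *= 2  # 100% growth per month
print(f"After 12 months: {volume_tb:,.0f} TB (~{volume_tb / 1024:.0f} PB)")
```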

When an executive team starts to get good quality data, what typically comes back is a lot of questions that require more data. The continuous need to ask and answer questions is the cycle that is driving data demand.

“The more data that organizations have, the more insight that they can gain from it, the more they want, and the more they need,” Thompson said.

HPE brings InfoSight AI to SimpliVity HCI

Hewlett Packard Enterprise has made its InfoSight predictive analytics resource management capabilities available on its HPE SimpliVity hyper-converged infrastructure platform.

InfoSight provides capacity utilization reports and forecasts, and sends alerts of possible problems before users run out of capacity. HPE acquired InfoSight when it bought Nimble Storage in March 2017, two months after it acquired early hyper-converged infrastructure (HCI) startup SimpliVity. HPE ported Nimble’s InfoSight to its flagship 3PAR arrays and it is used in its new Primera storage, as well as the ProLiant servers that SimpliVity runs on.

HPE is also connecting SimpliVity to its StoreOnce backup appliances, allowing customers to move data from SimpliVity nodes to the StoreOnce deduplication backup boxes.

HPE disclosed plans to bring InfoSight to SimpliVity in June, and it is now generally available as part of the SimpliVity service agreement. StoreOnce integration with SimpliVity HCI is planned for the first half of 2020.

SimpliVity HCI trails Dell, Nutanix, Cisco

HPE has lagged its major server rivals in HCI sales, particularly Dell. IDC listed SimpliVity as fourth in branded HCI revenue in the second quarter with $83 million, only 2.3% of the market. No. 1 Dell ($533 million) and No. 2 Nutanix ($259 million) combined for nearly half of the total market share, with Cisco third at $114 million and a 6.2% share, according to IDC.

HPE’s commitment to SimpliVity has also been questioned by its hedging on HCI products. HPE makes Nutanix technology available as part of its GreenLake as-a-service program, and Nutanix sells its software bundled on HPE ProLiant hardware. HPE customers can also use its servers with VMware vSAN HCI software. And HPE this year launched Nimble Storage dHCI, a disaggregated platform that is not true HCI but competes with HCI products while allowing a greater degree of independent scaling of compute and storage resources. Nimble dHCI is also generally available this week.

HCI ‘comes down to data’

Pittsburgh-based trucking company Pitt Ohio has been a SimpliVity customer since before HPE acquired the HCI pioneer. Systems engineer Justin Brooks said he was familiar with InfoSight as a previous Nimble Storage customer, so he signed up for the beta program on SimpliVity. Brooks said he has used InfoSight since June, and finds it significantly aids him in managing capacity on his 19 HCI nodes used for primary storage and disaster recovery. 

“Most of it comes down to data – how much you’re replicating, and how much data is on there versus what the hypervisor supports,” Brooks said. “The InfoSight intelligence and prediction capabilities are great for SimpliVity, because on any hyper-convergence platform it’s all about scalability. You need to know when to scale out or move things around, so you can plan accordingly. Hyper-converged is not dirt cheap either, especially when it’s all-flash. It’s important to make sure you’re getting your money’s worth out of the resources.”

Brooks said he previously employed “guesstimates and fuzzy math” to predict SimpliVity HCI growth, but InfoSight does those predictions for him now. InfoSight shows data growth patterns over the past 30-, 60- and 90-day periods.

“You’re always worried how big your data sets are growing, especially on the SQL Server side,” he said. “You don’t get as high efficiency with dedupe and compression on SQL data as with file data. With SimpliVity you have to dig into the CLI and get deep in there, or see what was sent over from the production side or the DR side. InfoSight shows you that data more granularly.”

Brooks said he was concerned about HPE’s plans for SimpliVity when it made the acquisition in 2017, but he’s happy with its commitment. Pitt Ohio took advantage of an HPE buyback program to convert its SimpliVity OmniCubes that used Dell hardware into ProLiant-based SimpliVity nodes. Pitt Ohio had nine SimpliVity nodes before the HPE acquisition, and is up to 19 now. Brooks estimates 98% of his applications run on SimpliVity HCI. The trucking company is a VMware shop and first got into SimpliVity HCI for virtual desktop infrastructure. It has since switched from Cisco UCS servers and a variety of storage, including Dell EMC VMAX and Data Domain and Nimble arrays.

“We had a Frankenblock infrastructure,” Brooks said. “When we needed to refresh hardware, our options were to forklift everything in the data center or get on the hyper-converged route.”

Pitt Ohio now has one SimpliVity cluster for Microsoft SQL Server, another cluster for all other production workloads and a third for QA.

Brooks said he uses SimpliVity for data protection, but is considering adding a Cohesity backup appliance. That is so he can move file data to cheaper storage, and HPE sells Cohesity software on HPE Apollo servers. “We want to get some files off of SimpliVity because I’d rather not use all that flash disk for files,” Brooks said.

ArubaOS-CX upgrade unifies campus, data center networks

Aruba’s latest switching hardware and software unify network management and analytics across the data center and campus. The approach to modern networking is similar to the one that underpins rival Cisco’s initial success with enterprises upgrading campus infrastructure.

Aruba, a Hewlett Packard Enterprise company, launched this week its most significant upgrade to the two-year-old ArubaOS-CX (AOS-CX) network operating system. With the NOS improvements, Aruba unveiled two series of switches, the stackable CX 6300 and the modular CX 6400. Together, the hardware covers access, aggregation and core uses. 

The latest releases arrive a year after HPE transferred management of its data center networking group to Aruba. The latter company is also responsible for HPE’s FlexNetwork line of switches and software.

The new CX hardware is key to taking AOS-CX to the campus, where companies can take advantage of the software’s advanced features. As modular hardware, the 6400 can act as an aggregation or core switch, while the 6300 drives the access layer of the network where traffic comes from wired or wireless mobile or IoT devices.

For the data center, Aruba has the 8400 switch series, which also runs AOS-CX. The hardware marked Aruba’s entry into the data center market, where it has to build credibility.

“Many non-Aruba customers and some Aruba campus customers are likely to take a wait-and-see posture,” said Brad Casemore, an analyst at IDC. 

ArubaOS-CX everywhere  

Nevertheless, having one NOS powering all the switches does make it possible to manage them with the Aruba software that runs on top of AOS-CX. Available software includes products for network management, analytics and access control. 

For the wired and wireless LAN, Aruba has ClearPass, which lets organizations set access policies for groups of IoT and mobile devices; and Central, a cloud-based management console. For the data center, Aruba has HPE SimpliVity, which provides automated switch configurations during deployment of Aruba and HPE switches.

Aruba’s new line of CX 6300 and 6400 switches

New features in the latest version of ArubaOS-CX include Dynamic Segmentation, which lets enterprises assign policies to wired client devices based on port or user role. Other enhancements include support for an Ethernet VPN over VXLAN for data center connectivity.

Also, within the new 10.4 version of AOS-CX, Aruba integrated the Network Analytics Engine (NAE) with Aruba’s NetEdit software for orchestration of multiple switch configurations. NAE is a framework built into AOS-CX that lets enterprises monitor, troubleshoot and collect network data through the use of scripting agents.
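
The NAE agents themselves are configured through Aruba's own framework, which isn't documented in this article. As a generic illustration of the pattern they embody, polling a switch metric and raising an alert when it crosses a threshold, the sketch below uses a hypothetical REST endpoint and JSON shape; it is not the actual AOS-CX or NAE API.

```python
# Generic monitoring-agent pattern: poll a metric, compare it to a
# threshold, raise an alert. The REST path and JSON shape are hypothetical,
# not the actual AOS-CX/NAE API.

import time

import requests

SWITCH_STATS_URL = "https://switch.example.com/rest/v1/interfaces/1-1-1/stats"  # hypothetical
ERROR_THRESHOLD = 100  # assumed alert threshold

def poll_once():
    resp = requests.get(SWITCH_STATS_URL, timeout=5)
    resp.raise_for_status()
    stats = resp.json()  # assumed shape: {"rx_errors": <int>, ...}
    if stats.get("rx_errors", 0) > ERROR_THRESHOLD:
        print("ALERT: rx_errors above threshold:", stats["rx_errors"])

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)  # poll once a minute
```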

Aruba vs. Cisco

How well Aruba’s unification strategy for networking can compete with Cisco’s remains to be seen. The latter company has had significant success with the Catalyst 9000 campus switching line introduced in 2017 with Cisco’s DNA Center management console. Some organizations use the DNA product in data center networking.

In the first quarter of 2019, Cisco’s success with the Catalyst 9000 boosted its revenue share of the campus switching market by 5 points, according to the research firm Dell’Oro Group. During the same quarter, the combined revenue of the other vendors, which included HPE, declined.

In September, Gartner listed Cisco and Aruba as the leaders in the research firm’s Magic Quadrant for Wired and Wireless LAN Access Infrastructure.

Competition is fierce in the campus infrastructure market because enterprises are just starting to upgrade networks. Driving the current upgrade cycle is the switch to Wi-Fi 6 — the next-generation wireless standard that can support more devices than the present technology.

Wi-Fi 6 lets enterprises add IoT devices to their networks, ranging from IP telephones and surveillance cameras to medical devices and handheld computers. The latter are used in warehouses and on the factory floor.

That transition will drive companies to deploy aggregation and access switches with faster port speeds and PoE ports to power wired IoT gear.

Enterprises skeptical of cross-domain networking

Aruba, Cisco and other networking vendors pushing a unified campus and data center haven’t convinced many enterprises to head in that direction, IDC analyst Brandon Butler said. Adopting that cross-domain technology would require significant changes in current operations, which typically have separate IT teams responsible for the campus and the data center.

IDC has not spoken to many enterprises that have centralized management across domains, Butler said. “This idea that you’re going to have a single pane of glass across the data center and the campus and out to the edge, I just don’t know if the industry is quite there yet.”

Meanwhile, Aruba’s focus on its CX portfolio has left some industry observers wondering whether it would diminish the development of FlexNetwork switches and software. 

However, Michael Dickman, VP of Aruba product line management, said the company plans to fully support its FlexNetwork architecture “in parallel” with the CX portfolio.
