Tag Archives: Enterprise

Cloud misconfigurations can be caused by too many admins

When it comes to cloud security, enterprise employees can be their own worst enemy, especially when organizations stray too far from least-privilege models of access.

Data exposures have been a constant topic of news recently — often blamed on cloud misconfigurations — and have led to voter records, Verizon customer data and even army secrets being publicly available in cloud storage.

In a Q&A, BetterCloud CEO and founder David Politis discussed why SaaS security has become such big news and how enterprises can take control of these cloud misconfigurations in order to protect data.

Editor’s note: This conversation has been edited for length and clarity.

There have been quite a few stories recently about cloud misconfigurations leading to exposures of data. Do you think this is a new issue or just something that is becoming more visible now?

David Politis: This is an issue that has been around really since people started adopting SaaS applications. But it’s only coming out now because, in a lot of cases, the misconfigurations are not identified until it’s too late. In most cases, the configurations were put in place when the application was first deployed, or when a setting was changed years ago or six months ago, and it’s not until some high-profile exposure happens that the organization starts paying attention.

David Politis, CEO, BetterCloud

We’ve actually seen this recently. We had a couple of customers that we’d been talking to for, in one case, three years. And we told them three years ago, ‘You’re going to have issues X, Y and Z down the line, because you have too many administrators and because you have this issue with groups.’ And for three years, it had been lying dormant, essentially. And then, all of a sudden, they had an issue where all their groups got exposed to all the employees in the company. It’s a 10,000-person company, where every single employee in the entire company could read every single email distribution list.

Similarly, another company that we’ve talked to for a year came to us three weeks ago and said, ‘You told us we were going to have these problems. We just had one of the super admins that should not have been a super admin incorrectly delete about a third of our company’ — it’s about a 3,000-person company — ‘and a third of the company was left without email, without documents and without calendars and thought they got fired.’

A thousand people, in a matter of minutes, thought they got fired, because they had no access to anything. And they had to go in and restore all of it. Fifteen minutes of downtime for 1,000 people is a lot of confusion.

We’ve seen these types of incidents, and we’re seeing them in these environments. This is why we started the company almost seven years ago. But only now has the adoption of these SaaS applications reached enough critical mass that these problems are happening at scale and people are reporting them publicly.

You mention different SaaS security issues that can arise from cloud misconfigurations. Are these data exposure stories overshadowing bigger issues?

Politis: It’s the inadvertent piece that makes this so challenging. There are malicious actors, but a lot of these situations are not malicious. It’s a misconfiguration or just a general mistake that someone made. Even the deleted users were just a result of having too many administrators, which is a result of not understanding how to configure the applications to follow a least-privilege model.
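The "too many administrators" check Politis describes is simple to express programmatically. The sketch below is a minimal, hypothetical illustration — the role names, user inventory and the `max_super_admins` budget are invented for the example, not pulled from any particular SaaS product's API — but it captures the least-privilege idea: count the super admins and flag everyone beyond a small, reviewed budget.

```python
# Minimal least-privilege sketch: flag super-admin accounts beyond a
# small budget. The role names and user records are hypothetical; a
# real audit would pull this inventory from each SaaS admin API.
def audit_admins(users, max_super_admins=3):
    """Return (all super admins, those exceeding the budget)."""
    supers = [u["email"] for u in users if u.get("role") == "super_admin"]
    excess = supers[max_super_admins:]  # simplistic cutoff: review the rest
    return supers, excess

users = [
    {"email": "it-lead@example.com", "role": "super_admin"},
    {"email": "sales-ops@example.com", "role": "super_admin"},
    {"email": "intern@example.com", "role": "super_admin"},
    {"email": "hr@example.com", "role": "member"},
    {"email": "cfo@example.com", "role": "super_admin"},
]

supers, excess = audit_admins(users, max_super_admins=2)
print(f"{len(supers)} super admins; review: {excess}")
```

In practice the hard part is deciding the budget and the review process, not the counting — but even a loop this simple surfaces the class of problem that deleted a third of a company's accounts.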

I think, even if it’s a mistake, the kind of data that can be exposed is the most sensitive data, because we’ve hit a tipping point in how SaaS applications are being used. The cloud, in general, is being used as a system of record. If we go back maybe five or six years, I’m not sure we were at the point where cloud was being trusted as a system of record. It was kind of secondary. You could put some stuff there, maybe some design files, but now you have your [human resources] files.

Recently, we did a security assessment for a customer, and what we found was that all the HR files they had lived in a public folder in one of their cloud storage systems. And it was literally all their HR files, by employee. That was a misconfiguration that was definitely not malicious, and that’s as bad as it gets. We’re talking about Social Security numbers. We were finding documents such as background checks on employees that were publicly available. And if you knew how to go find them, you could pull them up.

That, I’d argue, is worse than people’s email being deleted for 15 minutes — and, again, it was completely by mistake. We spoke to the company, and the person in charge of HR was just not very familiar with these cloud-based systems. They misconfigured something at the folder level, and then all the files they were adding to the folder were becoming publicly available. And so I think it’s more dangerous, because you’re not even looking for a bad actor. It’s just happening, day in, day out, which I think is actually harder to catch.

Should all enterprises assume there is a cloud misconfiguration somewhere? How difficult is it to find these issues?

Politis: I can say from our experience that nine out of 10 environments we go into — and it doesn’t matter the size of the organization — have a major, critical misconfiguration somewhere. And it is possible, in most cases, to find the misconfiguration, but it’s a little bit like finding a needle in a haystack. It requires a lot of time, because the only way to do it is to go page by page in the admin console: click on every setting, look at every group, look at every channel and look at every folder. So unless you’re doing it programmatically, right now, there are not many [other] ways to do it.
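To make the "programmatically" alternative concrete, here is a hedged sketch of one such check: scanning an access control list for world-readable grants. The sample ACL below is hand-written in the shape AWS S3's GetBucketAcl call returns; in a real audit you would fetch it live (e.g. with `boto3.client("s3").get_bucket_acl(Bucket=name)`) and run the same test against every bucket or folder.

```python
# Sketch of a programmatic public-exposure check. The ACL here is a
# hand-written sample in the shape returned by S3's GetBucketAcl; a
# real scan would fetch ACLs for every bucket and apply this filter.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl):
    """Return the grants in an S3-style ACL that expose data publicly."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("URI") == ALL_USERS
    ]

sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS},
         "Permission": "READ"},
    ]
}

exposed = public_grants(sample_acl)
print("PUBLIC" if exposed else "private")
```

A loop like this over every storage container is exactly the page-by-page admin-console walk Politis describes, compressed into seconds — which is why misconfigurations like the public HR folder tend to be found programmatically or not at all.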

This is self-serving, but this is why we built BetterCloud: to identify those blind spots. There’s a real need. When we went to look at these environments and started logging into Salesforce and Slack and Dropbox and Google, it could take you months to go through an environment of a couple hundred employees and check all the configurations in all the different areas, because there [are] so many places where a misconfiguration can be a problem.

The way people have to do it today is manually. And it can take a very long time, [depending] on how big the organization is, how long it has been using SaaS applications, how much it has adopted cloud in general, and the sprawl of data it has to manage and, more importantly, the sprawl of entitlements, configuration settings and permissions across all its SaaS applications.

And we’re seeing that a large portion of this is not even IT’s fault. The misconfigurations may predate the IT organization in many cases, because the SaaS application has been around longer than that IT organization or that IT leader.

In many cases, it may be the end users who are misconfiguring, because they have a lot of control over these applications. It could be that an app started as shadow IT and was configured by a shadow IT group in a certain way. When the apps are taken over by the IT organization, a lot of that configuration cleanup isn’t done, and so the apps don’t fit within IT’s policies.

We also have a lot of customers where the number of admins they have is crazy, because sales operations was responsible for the app and, generally speaking, it’s easier to make everyone an admin and let them make their own changes. But when IT eventually takes over the security and management of Salesforce, the work required to find all the misconfigurations is really hard. That goes for Dropbox, Slack and anything else that starts as shadow IT; you’re going to have those problems.

OpenText OT2 hybrid cloud EIM platform debuts

TORONTO — OpenText OT2, a new hybrid cloud-on-premises enterprise information management platform, brings a self-service SaaS application development environment to customers that crave the cloud’s flexibility and economy but often must keep some data at home, typically because of regulatory concerns or legacy application tethers.

Hybrid cloud EIM deployments — and apps connecting them — were previously feasible on OpenText’s AppWorks platform. But the company promises that the combination of OpenText OT2’s unified data model, updated interface and modernized, developer-friendly environment will make it more straightforward and faster to design and update applications.

That will cut development time for custom apps from weeks to hours in some cases, Muhi Majzoub, OpenText executive vice president of engineering and IT, said in an interview.

“OT2 simplifies for our customers how they invest and make decisions in taking some of their on-premises workflows and [port] them into a hybrid model or SaaS model into the cloud,” Majzoub said.

He added that life sciences and financial customers have taken a particular interest in that process to streamline records processes — and add AI and analytics behind them.

OpenText Enterprise World 2018

New tools from new acquisitions

Some of the new OpenText OT2 microservices that can be deployed with low-code appdev or native programming reflect OpenText acquisitions from the last couple of years such as Covisint and Guidance Software. The short list of tools from these includes analytics, ID management and IoT endpoint security.

OpenText OT2 debuted at the company’s annual OpenText Enterprise World 2018 user conference, with Majzoub planning to demonstrate the first OpenText OT2 apps, some created by customers and others by OpenText employees.

The company plans to keep OpenText OT2 tightly integrated with the current release of its main suite, OpenText 16, using quarterly updates. OpenText 16 connects to numerous associated applications and services, many of which have thousands of customers, such as document management platforms Documentum and Core, as well as software designed for specific vertical markets such as legal and manufacturing.

OpenText CEO and CTO Mark Barrenechea giving the keynote at OpenText Enterprise World 2018 in Toronto

Widening the market

OpenText OT2 apps will also be available for partners to run on Amazon AWS, Microsoft Azure and Google clouds.

It will be interesting to see if enterprise technology buyers will need, for example, OpenText Magellan AI apps set up specifically for content, said Alan Lepofsky, Constellation Research vice president and principal analyst.

Also remaining to be seen will be how the new system will compete against other vendors’ products, such as those from ABBYY that also offer content-specific AI tools.

“It comes down to: Will customers want to use a general AI platform like Azure, Google, IBM or AWS?” Lepofsky said. “Will the native AI functionality from OpenText compare and keep up? What will be the draw for new customers?”

The case for cloud storage as a service at Partners

Partners HealthCare relies on its enterprise research infrastructure and services group, or ERIS, to provide an essential service: storing, securing and enabling access to the data files that researchers need to do their work.

To do that, ERIS stood up a large network file store providing up to 50 TB of storage, so the research departments could consolidate their network drives, while managing access to those files through a permission system.

But researchers were contending with growing demands to better secure data and track access, said Brent Richter, director of ERIS at the nonprofit Boston-based healthcare system. Federal regulations and state laws, as well as standards and requirements imposed by the companies and institutions working with Partners, required increasing amounts of access controls, auditing capabilities and security layers.

That put pressure on ERIS to devise a system that could better meet those heightened healthcare privacy and security requirements.

“We were thinking about how do we get audit controls, full backup and high availability built into a file storage system that can be used at the endpoint and that still carries the nested permissions that can be shared across the workgroups within our firewall,” he explained.

Hybrid cloud storage as a service

At the time, ERIS was devising security plans based on the various requirements established by the different contracts and research projects, filling out paperwork to document those plans and performing time-intensive audits.

It was then that ERIS explored ClearSky Data. The cloud-storage-as-a-service provider was already being used by another IT unit within Partners for block storage; ERIS decided six months ago to pilot the ClearSky Data platform.

“They’re delivering a network service in our data center that’s relatively small; it has very fast storage inside of it that provides that cache, or staging area, for files that our users are mapping to their endpoints,” Richter explained.

From there, automation and software systems from ClearSky Data take those files and move them to its local data center, which is in Boston. “It replicates the data there, and it also keeps the server in our data center light. [ClearSky Data] has all the files on it, but not all the data in the files on it; it keeps what our users need when they’re using it.”

Essentially, ClearSky Data delivers on-demand primary storage, off-site backup and disaster recovery as a single service, he said.

All this, however, is invisible to the end users, he added. The researchers accessing data stored on the ClearSky Data platform, as well as the one built by ERIS, do not notice the differences in the technologies as they go about their usual work.

ClearSky benefits for Partners

ERIS’ decision to move to ClearSky Data’s fully managed service delivered several specific benefits, Richter said.

He said the new approach reduced the system’s on-premises storage footprint, while accelerating a hybrid cloud strategy. It delivered high performance, as well as more automated security and privacy controls. And it offered more data protection and disaster recovery capabilities, as well as more agility and elasticity.

Richter said buying the capabilities also helped ERIS to stay focused on its mission of delivering the technologies that enable the researchers.

“We could design and engineer something ourselves, but at the end of the day, we’re service providers. We want to provide our service with all the needed security so our users would just be able to leverage it, so they wouldn’t have to figure out whether it met the requirements on this contract or another,” Richter said.

He noted, too, that the decision to go with a hybrid cloud storage-as-a-service approach allowed ERIS to focus on activities that differentiate the Partners research community, such as supporting its data science efforts.

“It allows us to focus on our mission, which is providing IT products and services that enable discovery and research,” he added.

Pros and cons of IaaS platform

Partners’ storage-as-a-service strategy fits into the broader IaaS market, which has traditionally been broken into two parts: compute and storage, said Naveen Chhabra, a senior analyst serving infrastructure and operations professionals at Forrester Research Inc.


In that light, ClearSky Data is one of many providers offering not just cloud storage, but the other infrastructure layers — and, indeed, the whole ecosystem — needed by enterprise IT departments, with AWS, IBM and Google being among the biggest vendors in the space, Chhabra said.

As for the cloud-storage-as-a-service approach adopted by Partners, Chhabra said it can offer enterprise IT departments flexibility, scalability and faster time to market — the benefits that traditionally come with cloud. Additionally, it can help enterprise IT move more of their workloads to the cloud.

There are potential drawbacks in a hybrid cloud storage-as-a-service setup, however, Chhabra said. Applying and enforcing access management policies in an environment spanning both on-premises and IaaS platforms can be challenging for IT, especially as deployment size grows. And while implementing cloud-storage-as-a-service platforms, and IaaS in general, isn’t particularly challenging from a technology standpoint, moving applications onto the new platform may not be as seamless or frictionless as promoted.

“The storage may not be as easily consumable by on-prem applications. [For example,] if you have an application running on-prem and it tries to consume the storage, there could be an integration challenge because of different standards,” he said.

IaaS may also be more expensive than keeping everything on premises, he said, adding that the higher costs aren’t usually significant enough to outweigh the benefits. “It may be fractionally costlier, and the customer may care about it, but not that much,” he said.

Competitive advantage

ERIS’ pilot phase with ClearSky Data involves standing up a Linux-based file service, as well as a Windows-based file service.

Because ERIS uses a chargeback system, Richter said the research groups his team serves can opt to use the older internal system — slightly less expensive — or they can opt to use ClearSky Data’s infrastructure.

“For those groups that have these contracts with much higher data and security controls than our system can provide, they now have an option that fulfills that need,” Richter said.

That itself provides Partners a boost in the competitive research market, he added.

“For our internal customers who have these contracts, they then won’t have to spend a month auditing their own systems to comply with an external auditor that these companies bring as part of the sponsored research before you even get the contract,” Richter said. “A lot of these departments are audited to make sure they have a base level [of security and compliance], which is quite high. So, if you have that in place already, that gives you a competitive advantage.”

Ctera Networks adds Dell and HPE gateway appliance options

Ctera Networks is aiming to move its file storage services into the enterprise through new partnerships with Dell and Hewlett Packard Enterprise.

The partnerships, launched in June, allow Ctera to bundle its Enterprise File Services Platform on more-powerful servers with greater storage capacity. Ctera previously sold its branded cloud gateway appliances on generic white box hardware at a maximum raw capacity of 32 TB. The new Ctera HC Series Edge Filers include the HC1200 model offering as much as 96 TB, the HC400 with as much as 32 TB and the HC400E at 16 TB on Dell or HPE servers with 3.5-inch SATA HDDs.

The gateway appliances bundle Ctera’s file services that provide users with access to files on premises and transfers colder data to cloud-based, scale-out object storage at the customer site or in public clouds.

The new models include the Petach Tikvah, Israel, company’s first all-flash appliances. The HC1200 is equipped with 3.84 TB SATA SSDs and offers a maximum raw capacity of 46.08 TB. The HC400 tops out at 15.36 TB. The all-flash models use HPE hardware with 2.5-inch read-intensive SSDs that carry a three-year warranty.

Ctera Networks doesn’t sell appliances with a mix of HDDs and SSDs. The HC400 and HC400E are single rack-unit systems with four drive bays, and the HC1200 is a 2U device with 12 drive bays.

“In the past, we had 32 TB of storage, and it would replace a certain size of NAS device. With this one, we can replace a large series of NAS devices with a single device,” Ctera Networks CEO Liran Eshel said.

Ctera HC Series Edge Filers
New Ctera HC Series Edge Filers include Ctera Networks HC1200 (top) and HC400 file storage.

New Ctera Networks appliances enable multiple VMs

The new, more-powerful HC Series Edge Filers will enable customers to run multiple VMware virtual machines (VMs), applications and storage on the same device, Eshel said. The HC Series supports 10 Gigabit Ethernet networking with fiber and copper cabling options.

“Our earlier generation was just a cloud storage gateway. It didn’t do other things,” Eshel said. “With this version, we actually have convergence — multiple applications in the same appliance. Basically, we’re providing top-of-the-line servers with global support.”

The Dell and HPE partnerships will let Ctera Networks offer on-site support within four hours, as opposed to the next-business-day service it provided in the past. Ctera will take the first call, Eshel said, and be responsible for the customer ticket. If it’s a hardware issue, Ctera will dispatch partner-affiliated engineers to address the problem.

Using Dell and HPE servers enables worldwide logistics and support, which is especially helpful for users with global operations.

“It was challenging to do that with white box manufacturing,” Eshel said.

Software-defined storage vendors require these types of partnerships to sell into the enterprise, said Steven Hill, a senior analyst at 451 Research.

“In spite of the increasingly software-based storage model, we find that many customers still prefer to buy their storage as pre-integrated appliances, often based on hardware from their current vendor of choice,” Hill wrote in an e-mail. “This guarantees full hardware compatibility and provides a streamlined path for service and support, as well as compatibility with an existing infrastructure and management platform.”

Cloud object storage options

The Ctera product works with on-premises object storage from Caringo, Cloudian, DataDirect Networks, Dell EMC, Hitachi, HPE, IBM, NetApp, Scality and SwiftStack. It also supports Amazon Web Services, Google Cloud Platform, IBM Cloud, Microsoft Azure, Oracle and Verizon public clouds. Ctera has reseller agreements with HPE and IBM.

Eshel said one multinational customer, WPP, has already rolled out the new appliances in production for use with IBM Cloud.

The list price for the new Ctera HC Series starts at $10,000. Ctera also continues to sell its EC Series appliances on white box hardware. Customers have the option to buy the hardware pre-integrated with the Ctera software or purchase virtual gateway software that they can install on server hypervisors on premises or in Amazon or Azure public clouds.

HPE aims new SimpliVity HCI at edge computing

Hewlett Packard Enterprise has introduced a compact hyper-converged infrastructure system destined for running IoT applications at the network’s edge.

HPE unveiled the SimpliVity 2600 this week, calling the device the “first software-optimized offering” in the SimpliVity HCI line. The 2U system is initially built to run a virtual desktop system, but its size and computing power make it “ideal for edge computing applications,” said Lee Doyle, the principal analyst at Doyle Research, based in Wellesley, Mass.

Thomas Goepel, the director of product management at HPE, said the company would eventually market the SimpliVity 2600 for IoT and general-purpose applications that require a smallish system with a dense virtualized environment.

As virtual desktop infrastructure, the SimpliVity 2600 provides a scale-out architecture that lets companies increase compute, memory and storage as needed. The system also has built-in backup and disaster recovery for desktop operations.

Intel Xeon processors with 22 cores each power the SimpliVity 2600, which supports up to 768 GB of memory. Hardware features include a redundant power supply, hot-pluggable solid-state drives, cluster expansion without downtime and an integrated storage controller with a battery-backed cache. The system also has a 10 GbE network interface card.

HPE’s planned Plexxi integration

HPE’s SimpliVity HCI portfolio stems from last year’s $650 million purchase of HCI vendor SimpliVity Corp. The acquired company’s technology for data deduplication and compression was a significant attraction for HPE, analysts said.

HPE has said it will eventually incorporate in its SimpliVity HCI systems the hyper-converged networking (HCN) technology of Plexxi. HPE announced its acquisition of Plexxi in May but did not disclose financial details.

“[An] HPE SimpliVity with Plexxi solution is on the roadmap,” Goepel said. He did not provide a timetable.

Plexxi’s HCN software enables a software-based networking fabric that runs on Broadcom-powered white box switches. Companies can use VMware’s vCenter dashboard to orchestrate virtual machines in a Plexxi HCI system. Plexxi software can also detect and monitor VMware NSX components attached to the fabric.

IT infrastructure management software learns analytics tricks

IT infrastructure management software has taken on a distinctly analytical flavor, as enterprise IT pros struggle to keep up with the rapid pace of DevOps and technology change.

Enterprise IT vendors that weren’t founded with AIOps pedigrees have added data-driven capabilities to their software in 2018, while startups focused on AI features have turned heads, even among traditional enterprise companies. IT pros disagree on the ultimate extent of AI’s IT ops automation takeover. But IT infrastructure management software that taps data analytics for decision-making has replaced tribal knowledge and manual intervention at most companies.

For example, Dolby Laboratories, a sound system manufacturer based in San Francisco, replaced IT monitoring tools from multiple vendors with OpsRamp’s data-driven IT ops automation software, even though Dolby is wary of the industry’s AIOps buzz. OpsRamp monitors servers and network devices under one interface, and it can automatically discover network configuration information, such as subnets and devices attached to the network.

“You can very easily get a system into the monitoring workflow, whereas a technician with his own separate monitoring system might not take the last step to monitor something, and you have a problem when something goes down,” said Thomas Wong, Dolby’s senior director of enterprise applications. OpsRamp’s monitoring software alerts are based on thresholds, but they also suggest remediation actions.

Dolby’s “killer app” for OpsRamp’s IT ops automation is to patch servers and network devices, replacing manual procedures that required patches to be downloaded separately and identified by a human as critical, Wong said.

Still, Wong said Dolby will avoid OpsRamp version 5.0 for now, which introduced new AIOps capabilities in June 2018.

“We’re staying away from all of that,” he said. “I think it’s just the buzz right now.”

Data infiltrates IT infrastructure management software

While some users remain cautious or even skeptical of AIOps, IT infrastructure management software of every description — from container orchestration tools to IT monitoring and incident response utilities — now offers some form of analytics-driven automation. That ubiquity indicates at least some user demand, and IT pros everywhere must grapple with AIOps as tools they already use add AI and analytics features.

PagerDuty, for example, has concentrated on data analytics and AI additions to its IT incident response software in 2017 and 2018. A new AI feature added in June 2018, Event Intelligence, identifies patterns in human incident remediation behavior and uses those patterns to understand service dependencies and communicate incident response suggestions to operators when new incidents occur.

“The best predictor of what someone will do in the future is what they actually do, not what they think they will do,” said Rachel Obstler, vice president of products at PagerDuty, based in San Francisco. “If a person sees five alerts and an hour later selects them together and says, ‘Resolve all,’ that tells us those things are all related better than looking at the alert payloads or the times they were delivered.”
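The grouping signal Obstler describes — alerts that a human repeatedly resolves together are probably related — can be sketched in a few lines. The data shapes below are invented for illustration and are not PagerDuty's API or algorithm; the point is only the co-occurrence counting at the core of the idea.

```python
# Sketch of behavior-based alert grouping: count how often pairs of
# alerts are resolved in the same "Resolve all" action. Data shapes
# are hypothetical, not PagerDuty's actual event model.
from collections import Counter
from itertools import combinations

def related_pairs(resolve_batches, min_support=2):
    """Return alert pairs co-resolved at least min_support times."""
    pairs = Counter()
    for batch in resolve_batches:
        for a, b in combinations(sorted(set(batch)), 2):
            pairs[(a, b)] += 1
    return {p: n for p, n in pairs.items() if n >= min_support}

history = [
    ["db-latency", "api-5xx", "queue-depth"],  # one bulk resolve
    ["db-latency", "api-5xx"],                 # another
    ["disk-full"],                             # unrelated singleton
]
print(related_pairs(history))
```

Even this toy version shows why the behavioral signal beats payload inspection: the pair that keeps getting resolved together surfaces regardless of what the alert bodies say.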

PagerDuty users are intrigued by the new feature, but skeptical about IT ops automation tools’ reach into automated incident remediation based on such data.

“I can better understand the impact [of incidents] on our organization, where I need to make investments and why, and I like that it’s much more data-driven than it used to be,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis.

SPS has built webhook integrations between PagerDuty alerts and AWS Lambda functions to attach documentation to each alert, which saves teams the time of searching a company wiki for information on how to resolve an alert. The integration also delivers recent change information.
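An integration like the one SPS describes can be sketched as a small handler that enriches each incoming alert with a runbook link. Everything here is hypothetical — the `RUNBOOKS` table, the payload shape and the wiki URLs are invented for illustration, not SPS's actual code or PagerDuty's webhook schema — but the pattern (webhook in, lookup, annotated alert out) is the whole idea.

```python
# Hypothetical webhook-enrichment handler, deployable as an AWS
# Lambda: attach a runbook link to each alert in the payload.
# The RUNBOOKS table and payload shape are invented for this sketch.
RUNBOOKS = {
    "api-5xx": "https://wiki.example.com/runbooks/api-5xx",
    "db-latency": "https://wiki.example.com/runbooks/db-latency",
}
DEFAULT_RUNBOOK = "https://wiki.example.com/runbooks/default"

def handler(event, context=None):
    """Annotate each alert in a webhook payload with its runbook URL."""
    for alert in event.get("alerts", []):
        alert["runbook"] = RUNBOOKS.get(alert.get("service"), DEFAULT_RUNBOOK)
    return event

out = handler({"alerts": [{"service": "api-5xx"}, {"service": "unknown"}]})
print(out["alerts"][0]["runbook"])
```

The design choice Domeier hints at holds here too: the lookup table encodes knowledge specific to one company's services, which is why this kind of enrichment lives inside the customer's own environment rather than in the vendor's product.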

“But if I want to do something meaningful in response to alerts, I have to be inside my network,” Domeier said. “I don’t think PagerDuty would be able to do that kind of thing at scale, because everyone’s environment is different.”

From IT ops automation to AIOps

AIOps is far from mainstream, but more companies aspire to fully data-driven IT ops automation. In TechTarget’s 2018 IT Priorities Survey, nearly as many people said they would adopt some form of AI (13.7%) as would embrace DevOps (14.5%). And IT infrastructure management software vendors have wasted no time serving up AIOps features as AI and machine learning buzz crests in the market.

Dynatrace’s IT monitoring tool performs predictive analytics and issues warnings to IT operators in shops such as Barbri, which offers legal bar review courses in Dallas.

“We just had critical performance issues surface recently that Dynatrace warned us about,” said Mark Kaplan, IT director at Barbri, which has used Dynatrace for four years. “We were able to react before our site went down.”


The monitoring vendor released Dynatrace Artificial Virtual Intelligence System, or DAVIS, an AI-powered digital virtual assistant for IT operators, in early 2017. And Barbri now uses it frequently for IT root-cause analysis and incident response. Barbri will also evaluate Dynatrace log analytics features to possibly replace Splunk.

Kaplan has already grown accustomed to daily reports from DAVIS and wants it to do more, such as add a voice interface similar to Amazon Echo’s Alexa and automated incident response.

“We can already get to the point of self-remediation if we make the proper scripts in a convoluted setup,” he said. “But we see something smoother coming in the future.”

Since Barbri rolled out DAVIS, IT ops pros have embraced a more strategic role as infrastructure architects, rather than firefighters. Nevertheless, enterprises still insist on control. Even as AIOps tools push the boundaries of machine control over other machines, unattended AI remains a distant concept for IT infrastructure management software, if it ever becomes reality.

“No one’s talking about letting AI take over completely,” Kaplan said. “Then, you end up in a HAL 9000 situation.”

The future of AI looks very human

Konica Minolta Inc., a Japanese digital systems manufacturing company, teamed up with AIOps startup ScienceLogic for a new office printer product, called Workplace Hub, which can also deliver IT management services for SMB customers. ScienceLogic’s AIOps software will be embedded inside Workplace Hub and used on the back end at Konica Minolta to manage services for customers.

But AI will only be as valuable as the human decisions it enables, said Dennis Curry, executive director and deputy CTO at Konica Minolta. He, too, is skeptical of AI that functions unattended by humans; instead, he expects AI to augment human intelligence both inside and outside of IT environments.

“AI is not a sudden invention — I worked in the mid-1990s for NATO on AI and neural networks, but there wasn’t a digital environment then where it could really flourish, and we have that now,” Curry said. “It’s just an evolution of the standard statistics we’ve always used, and that evolution is much more human than most people believe.”

Microsoft: Enterprises need to stop fighting cloud services adoption

BOSTON — Enterprise security teams’ zero-trust mindset is often a good thing. But when it comes to cloud services adoption, Microsoft argued it may be doing more harm than good.

During her session this week at the 2018 Identiverse conference, “The Cake is Not a Lie,” Laura Hunter, principal program manager at Microsoft, said security professionals need to change their default reactions when their organizations want to introduce new cloud applications. She said she finds security professionals go through something similar to the five stages of grief when their organization begins the process of cloud services adoption.

“I put a portion of the blame at the feet of the cloud service provider,” Hunter said regarding why the reactions are so intense. “When [cloud service providers] talk to our customers, we maybe, historically, lay it on a little thick” and promise perfectly secured cloud environments.

The bigger problem, she noted, is security professionals are naturally predisposed to be skeptical of anything “new and shiny,” like the perfectly secured cloud environments.

“It’s in our nature, as security professionals, when we hear the stories of happy, shiny, flowers and goodness, to immediately go ‘shields up,'” she said. “In some ways, this is good. But, in some ways, it can actually work to our detriment.”

Microsoft’s Laura Hunter discusses cloud services adoption during her Identiverse 2018 session, ‘The Cake is Not a Lie’

According to Hunter, security professionals work in a zero-trust mindset, especially when it comes to cloud services adoption. As a result, when another unit in the organization proposes using a cloud application or service, security professionals have an automatic answer of “no,” because it would be bad for security.

However, if the IT department says no to a service that a business department or employee needs to do their job effectively, that business department or employee will most likely go out and procure the service on their own anyway.

Because of this, Hunter questioned the zero-trust mindset of security professionals.

“Is this default answer of ‘Trust no one’… really acting in our organization’s best interest?” she asked.

Maintaining that ‘hard-line no’ is actually making your organization less secure.
Laura Hunter, principal program manager, Microsoft

By maintaining a “hard-line no” as the default answer every time cloud services adoption is brought up, “we are removing ourselves from the conversation,” Hunter said. “We are removing ourselves from conversations our businesses are having whether we want them to or not.”

When it comes down to business need versus security, business need is always going to win, she said, and then you end up with shadow cloud IT that the security team has no control over.

“It’s going to happen anyway, and the only thing you’ve done by maintaining that hard-line approach is ensure that it happens without you at the table, ensure that it’s happening without you as part of the conversation” about how to monitor the cloud applications, apply controls and policies, and maintain organizational compliance, she said.

“Maintaining that ‘hard-line no’ is actually making your organization less secure,” Hunter said.

Security professionals should instead remain open-minded and “have a conversation that’s a question,” rather than always saying no. The solution is to embrace the use of cloud applications in the enterprise and work to find ways to make them more secure. Better yet, use the cloud services to improve enterprise security.

“Let’s use the cloud for the good of our organization,” she said.

Plattner: ‘Speed is the essence’ for SAP HANA Data Management Suite

The intelligent enterprise must be built on speed — speed like you get from the in-memory processing of the HANA database, which is also the foundation of the SAP HANA Data Management Suite, SAP co-founder Hasso Plattner said in his keynote address at Sapphire Now.

The vendor announced SAP HANA Data Management Suite at its annual user conference, which was held this month in Orlando, Fla. It’s an amalgam of SAP applications intended to allow companies to get a better grip on the various data sources flowing through the organization and extract business value, according to the company.

SAP HANA Data Management Suite consists of the SAP HANA database; SAP Data Hub, a data governance and orchestration platform; SAP Cloud Platform Big Data Services, a large-scale data processing platform based on Hadoop and Spark; and SAP Enterprise Architecture Designer, a collaborative cloud application that can let anyone in an organization participate in planning, designing and governing data analytics applications.

“When we talk about human intelligence, it’s directly correlated to speed. Every intelligence test in the world is based on how fast you can solve certain [tests]. Speed is the essence. And the faster you can do something, the faster you can simulate, the higher the probability that you reach a decent result,” Plattner said. “There are data pipelines, governance and workflows from one HANA system, and we can access all other HANA systems and even non-HANA systems. This is very important when we think about Leonardo projects we build outside the system, but can access any kind of data objects or services inside the system.”

Plattner outlined five innovations that are key to SAP HANA Data Management Suite:

  • data pipelines that allow access to data at its origin, which improves security and reduces management;
  • text and search, including natural language processing of unstructured data from sources like Hadoop;
  • spatial and graph functions that can combine business data with geographic and streaming data to enable much faster geo-enabled applications;
  • data anonymization that can be done on the fly, allowing for applications that can be in compliance with the General Data Protection Regulation in near-real time; and
  • persistent memory, which keeps data in nonvolatile storage that can greatly reduce the amount of time it takes to reload data in the event of an outage.
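SAP has not published implementation details for these features, but the on-the-fly anonymization idea can be illustrated in general terms: quasi-identifying fields are generalized at read time, so stored records stay intact while query results reveal less about individuals. The following Python sketch uses hypothetical field names and thresholds and is not SAP's API.

```python
# Illustrative sketch of on-the-fly anonymization (not SAP's API):
# quasi-identifiers are generalized as records are read, so the
# stored data is never modified, only the view of it.

def generalize_age(age: int, bucket: int = 10) -> str:
    """Coarsen an exact age into a range, e.g. 34 -> '30-39'."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def generalize_zip(zip_code: str, keep: int = 3) -> str:
    """Keep only the leading digits of a postal code."""
    return zip_code[:keep] + "*" * (len(zip_code) - keep)

def anonymize(records):
    """Apply generalization on the fly; originals are unchanged."""
    for r in records:
        yield {
            "age": generalize_age(r["age"]),
            "zip": generalize_zip(r["zip"]),
            "purchase": r["purchase"],  # non-identifying field passes through
        }

rows = [{"age": 34, "zip": "32801", "purchase": 120.0}]
print(list(anonymize(rows)))
# first record becomes {'age': '30-39', 'zip': '328**', 'purchase': 120.0}
```

Because the transformation runs at query time, a compliance rule change (say, a wider age bucket) takes effect immediately without rewriting stored data — the property that makes near-real-time GDPR compliance plausible.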

“Our objective is not only that we connect everything with everything else, but that we can develop enhancements to products without touching the products. We have to reduce the maintenance efforts,” he said.

Not new, but bundle may help

Our objective is not only that we connect everything with everything else, but that we can develop enhancements to products without touching the products. We have to reduce the maintenance efforts.
Hasso Plattner, co-founder and chairman of the supervisory board, SAP

SAP HANA Data Management Suite is not really new, but the bundle may help SAP to market data management applications, said Holger Mueller, vice president and principal analyst at Constellation Research.

“It’s a combination of stuff they’ve already announced. So, it’s nothing new, but it’s really the glue to keep the new SAP together. They also want to simplify things; [Plattner] always thinks there are too many products and confusion with names and so on. So, why not put them together?” Mueller said. “In the field, they’re probably going to sell more Data Management Suite and Data Hub now that it’s bundled with HANA. They can throw it in together, so it really is just packaging. It’s a new product, so there’s only a few customers out there. But from what I see, there are some early projects, and it’s going.”

Embedding analytics into S/4HANA Cloud

Analytics is a major focus area for SAP, and the company announced it will begin to embed SAP Analytics Cloud functions directly in S/4HANA Cloud, the SaaS version of its newest ERP platform, allowing organizations to plan and run analytics in one system. SAP Analytics Cloud provides analytics for business intelligence — including SAP BusinessObjects, SAP Business Planning and Consolidation (BPC) and SAP Business Warehouse (BW) capabilities in one cloud-based platform, said Mike Flannagan, senior vice president of SAP Analytics and SAP Leonardo. The analytics functions are embedded in the applications themselves, rather than available only in the data layer.

“We’re not just providing access to data that sits on premises; we’re also supporting things like doing planning in Analytics Cloud that allows you to write that to BPC. So, you can continue to do all of your systems of record in BPC, but access the information and do the planning in Analytics Cloud, which has advanced features and a more modern interface,” Flannagan said. “Just having access to the data is one thing, but our customers haven’t invested in BusinessObjects only because of the data that’s there. It’s also the semantic layer where they’ve made a significant investment — years of investment. So, being able to take advantage of that investment while using cloud functionality is really important.”

SAP faces a crowded competitive landscape on the analytics front, but the expansion of the SAP Analytics Cloud portfolio may help differentiate it, said Doug Henschen, vice president and principal analyst at Constellation Research.

“SAP Analytics Cloud is roughly 3 years old, but it was fairly late to the market — such that many large customers had already taken other paths to self-service analytics like Tableau, Microsoft PowerBI and Qlik, or to cloud-based planning with Anaplan, Adaptive Planning or Host Analytics. And there’s plenty of competition on the predictive analytics front, as well,” Henschen said.

“Standardization on SAC [SAP Analytics Cloud] across the SAP portfolio is a good move on SAP’s part that could help sway customers to give it a second look,” Henschen continued. “But I think SAP has to continue to deepen the capabilities on the self-service analytics, planning and predictive fronts to stand up against best-of-breed competitors in each of these niches.”

HPE partners urged to tackle the ‘Super Six’ market opportunities

LAS VEGAS — At the HPE Global Partner Summit 2018, Hewlett Packard Enterprise laid out six market opportunities it wants to jointly pursue with channel partners.

HPE executives said half of the “Super Six” opportunities are ripe for HPE partners to immediately capture and half involve more long-term strategies and investments. The short-term opportunities include Gen10 servers, blade servers and all-flash arrays, HPE said. The forward-looking opportunities center on everything as a service, software-defined infrastructure and the intelligent edge.

“What the Super Six represents are six huge transitions that we see happening right now in the marketplace,” said Phil Davis, chief sales officer at HPE, at this week’s Summit.

He said HPE has been busy transforming its portfolio to align with the Super Six opportunities, which he said collectively make up a $65 billion market. Additionally, the company has received clear signals from its customer base that the Super Six represents areas to double down on. “The accelerated growth rates that we are seeing in these areas absolutely prove [the Super Six] is where our customers see … value,” he said.

Short-term opportunities for HPE partners

Davis described HPE’s immediate Super Six opportunities as having about a two- or three-year window that partners can exploit if they act fast.

Gen10 transitions: Customer refreshes to HPE’s Gen10 servers amount to about a $20 billion opportunity, Davis said. He noted that HPE partners that lead with HPE OneView, the vendor’s converged-infrastructure management platform, have “a great opportunity to refresh our own, as well as competitive environments.”

OneView “delivers a substantially differentiated experience for our customers in terms of ease of deployment of infrastructure, in terms of automation of routine tasks, in terms of being able to take costs out,” he said. He noted that HPE recently shipped its one millionth OneView license.

Blades: Davis asserted that HPE essentially invented the server blade category and has led it every quarter since.

It is critical that we lead with everything as a service [and] we lead with a consumption model with HPE GreenLake because it is what customers want.
Phil Davis, chief sales officer, HPE

The second strongest player in the market, Cisco, with its Unified Computing System (UCS) technology, is decreasing its focus on the blades market, he said. “If you watch what is going on with [Cisco’s] investments, it really seems like they are pulling back,” Davis said. “We haven’t seen the same pace of innovation for them, and it looks like they are doubling down on security and other places. It looks like Cisco isn’t looking to invest in the future.”

He said the server blades market represents a $6 billion opportunity for HPE and its partners, and it’s “all coming up for grabs right now.”

Flash storage: As a $7.7 billion market growing at 15%, the transition to all-flash storage is “a huge opportunity” for HPE partners to disrupt, Davis said.

Among HPE’s differentiators in the flash market is its InfoSight predictive analytics technology, which HPE gained through its Nimble Storage acquisition. Now deployed in all HPE storage systems, InfoSight uses machine learning and AI to predict and remediate storage environment issues.
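InfoSight's models are proprietary, but one common building block of this kind of predictive telemetry analysis can be sketched: flag a metric sample that deviates sharply from its recent trailing window, so remediation can begin before users notice. This Python sketch is illustrative only — the window size, threshold, and metric are hypothetical, not InfoSight's.

```python
# Minimal sketch of anomaly detection on storage telemetry, in the
# spirit of InfoSight-style tools (the real product's ML is proprietary):
# flag samples that deviate from the trailing window by more than
# `threshold` standard deviations.
from statistics import mean, stdev

def detect_anomalies(samples, window=5, threshold=3.0):
    """Return indices of samples far outside the trailing window."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady I/O latency in milliseconds, then a sudden spike at index 6.
latency_ms = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 9.8, 2.2]
print(detect_anomalies(latency_ms))  # -> [6]
```

A production system would of course use richer models and fleet-wide data, but the principle — compare the present against a learned baseline and act on the deviation — is the same.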

Davis said HPE has looked to gain a strong position by recently rolling out the HPE Store More Guarantee for the Nimble all-flash array. Launched in March, Store More Guarantee promises customers that the Nimble product “will store more data per raw terabyte of storage than any other vendor’s” all-flash array, according to HPE. The guarantee states that HPE will accommodate the customer’s additional storage for free if the HPE Nimble all-flash array fails to meet the storage efficiency of a competitor’s array.

Long-term HPE partner plays

HPE’s Super Six long-term technology opportunities focus on generational changes occurring within customer organizations. Davis said these changes will be “long-lasting over the next three, five, maybe 10 years.”


HPE executives discuss the flash storage market at the Discover 2018 conference.

Everything as a service: HPE recognizes that the public cloud has transformed customer expectations of how IT should operate. The main expectations are that IT should be scalable and customers should pay only for what they use, Davis said. HPE’s solution for this market trend is HPE GreenLake consumption-based services delivered by HPE Pointnext. The company unveiled a partner program for HPE GreenLake Flex Capacity services on Monday.

“It is critical that we lead with everything as a service [and] we lead with a consumption model with HPE GreenLake because it is what customers want,” Davis said.

Software-defined infrastructure: HPE said it is looking to deliver an on-premises yet public cloud-like experience for customers at lower pricing compared with cloud. Core to that aim is the vendor’s software-defined strategy, which Davis said partners can think of as “going hand in hand” with HPE’s everything-as-a-service focus.

To cash in on the opportunity, HPE partners need to start customer conversations with HPE OneView, which manages the data center’s software-defined systems, he said. Synergy, HPE’s composable infrastructure platform, is another important piece for capturing the opportunity. Synergy “is really one solution for all of your customers’ workloads, whether they are virtualized, bare-metal [or] containers. It really gives a very flexible, automated experience to your customers with the maximum efficiency of resources,” he said.

HPE’s acquisition of Plexxi plays an important role, providing hyper-converged networking technology. “What [Plexxi] enables us to do is put that same composability that you have come to love in Synergy into hyper-converged,” he said.

Intelligent edge: The intelligent edge was a main area of focus at HPE’s Discover conference, held in conjunction with the Global Partner Summit. HPE revealed it would invest $4 billion over the next four years in intelligent edge technology and services.

“This is a massive, disruptive opportunity that we see,” Davis said, adding that the total addressable market is expected to reach $26 billion by 2020.

Antonio Neri, HPE’s president and CEO, told HPE Global Partner Summit 2018 attendees that “most of the data … is now generated on what we call ‘the edge.'” Seventy-five percent of data is generated on the edge, he added.

The edge is where “we live and work, where billions of people and trillions of things come together, interacting with each other but most importantly interacting with data in a secure way,” Neri noted. “That’s the opportunity: In this sea of 1s and 0s, how do we create value?”

Davis pointed to HPE’s Aruba Networks portfolio as the means for HPE partners to jump into the intelligent edge market. Additionally, through HPE’s acquisition of Niara, Aruba’s IntroSpect user and entity behavior analytics technology can play a role.

“We see opportunities in every industry, everywhere around the globe,” Davis said.

Aruba taps ClearPass, Central for SD-Branch management

Aruba, a Hewlett Packard Enterprise company, has unveiled software-based wired and wireless networking for the branch that includes a cloud-managed software-defined WAN.

This week, Aruba introduced the software-defined branch technology at the HPE Discover conference in Las Vegas. The latest product, which comprises software and hardware, operates in conjunction with the Aruba Central cloud-based management platform and the Aruba ClearPass policy manager for network access control.

Combined with Aruba access points and switches, the system provides everything a customer needs to run a LAN and an SD-WAN. The latter is for routing traffic to and from the corporate data center, IoT devices and SaaS and IaaS applications. IoT devices could include surveillance cameras, point-of-sale systems, and air conditioning and heating systems.

Aruba’s offering is best-suited for smaller enterprises with a wireless-first strategy in the branch, said Will Townsend, an analyst at Moor Insights & Strategy, based in Austin, Texas. “When you look at SD-Branch and look at what Aruba is doing, it’s going to be ideally suited for a greenfield deployment — with mobile the trick — and a small to midmarket-type profile of the customer.”

Aruba SD-Branch components

SD-Branch is a recent concept. The approach simplifies networking by using one device for multiple services, such as routing and firewalls. Aruba’s multi-function device is a gateway appliance a customer would deploy on each site.

The device includes an SD-WAN that routes traffic across the branch’s various links, including MPLS, LTE and broadband. The hardware also executes ClearPass access policies for individuals, groups of people, desktops and mobile and IoT devices. IT staff create the policies that define the available infrastructure, applications and data.

“We’re collapsing that SD-WAN functionality into the gateway and now the gateway becomes the central point of policy enforcement within the branch,” said Lissa Hollinger, a vice president of product and solutions marketing at Aruba.

Aruba Central oversees the SD-WAN, as well as the branch’s access points (APs), switches and routers. The cloud-based application also stores reusable configuration templates for gateways, APs and switches. Central uses the ClearPass-generated templates to automatically provision new devices.
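The template-driven provisioning pattern described here — a central store of reusable per-device-class templates filled in with branch-specific values when a device phones home — can be sketched briefly. This Python example is a generic illustration with made-up template fields; it is not Aruba Central's actual API or configuration syntax.

```python
# Hypothetical sketch of template-driven zero-touch provisioning,
# the general pattern described above (not Aruba Central's API).
from string import Template

# One reusable template per device class, stored centrally.
GATEWAY_TEMPLATE = Template(
    "hostname $hostname\n"
    "vlan $mgmt_vlan\n"
    "uplink mpls primary\n"
    "uplink broadband backup\n"
)

def provision(device_serial: str, site: dict) -> str:
    """Render a branch-specific config when a new device checks in."""
    return GATEWAY_TEMPLATE.substitute(
        hostname=f"gw-{site['branch_id']}-{device_serial[-4:]}",
        mgmt_vlan=site["mgmt_vlan"],
    )

config = provision("SN00112233", {"branch_id": "bos01", "mgmt_vlan": 10})
print(config)
```

The appeal of the pattern is that a fix to one template propagates to every branch on the next provisioning cycle, rather than requiring a per-site change.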

Other components of the Aruba system include a headend gateway at the corporate data center that creates an IPsec tunnel to each branch. The device also has a firewall with essential features for bidirectional filtering of data center traffic.

For customers that want more security, Aruba provides the option of integrating the branch gateway with cloud-based firewalls from Check Point Software Technologies, Palo Alto Networks and Zscaler.

“The integration of [data protection] for WAN services and ClearPass for policy management makes this a competitive offering in the marketplace,” said Mark Hung, an analyst at Gartner.

To lessen the workload of IT staff, Aruba offers a mobile installer app. When a gateway, switch or AP arrives at a branch office, a nontechnical person can scan its barcode with the app to confirm the device belongs at that location. The check prevents unregistered hardware from downloading the preset configurations for that branch.
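The registration check behind that barcode scan amounts to a lookup against a central registry: a scanned serial must be pre-assigned to the branch before the device may pull that branch's configuration. A minimal Python illustration, with invented serial numbers and branch names:

```python
# Hypothetical sketch of the installer-app check described above:
# a device serial must be registered to this branch before it may
# download the branch's preset configuration.
REGISTRY = {
    "SN-1001": "branch-nyc",
    "SN-1002": "branch-sfo",
}

def may_download_config(scanned_serial: str, branch: str) -> bool:
    """True only if the device was pre-registered to this location."""
    return REGISTRY.get(scanned_serial) == branch

print(may_download_config("SN-1001", "branch-nyc"))  # registered here: True
print(may_download_config("SN-1002", "branch-nyc"))  # wrong branch: False
```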

Primary users of LANs built with Aruba technology include businesses within the retail, hospitality and healthcare industries. Aruba’s largest enterprise customers typically have an IT staff of less than a dozen people managing from 2,500 to 3,000 branch offices, according to Hollinger. 

Aruba sells the SD-Branch technology as part of Aruba Central. The gateways have a starting price of $1,495, plus an annual subscription of $450. Aruba plans to release the technology in July.