
SMBs struggle with data utilization, analytics

While analytics have become a staple of large enterprises, many small and medium-sized businesses struggle to utilize data for growth.

Large corporations can afford to hire teams of data scientists and provide business intelligence software to employees throughout their organizations. While many SMBs collect data that could lead to better decision-making and growth, data utilization is a challenge when there isn’t enough cash in the IT budget to invest in the right people and tools.

Sensing that SMBs struggle to use data, Onepath, an IT services vendor based in Kennesaw, Ga., conducted a survey of more than 100 businesses with 100 to 500 employees to gauge their analytics capabilities for the “Onepath 2020 Trends in SMB Data Analytics Report.”

Among the most glaring discoveries, the survey revealed that 86% of the surveyed companies that invested in personnel and analytics felt they weren’t able to fully exploit their data.

Phil Moore, Onepath’s director of applications management services, recently discussed both the findings of the survey and the challenges SMBs face when trying to incorporate analytics into their decision-making process.

In Part II of this Q&A, he talks about what failure to utilize data could ultimately mean for SMBs.

What was Onepath’s motivation for conducting the survey about SMBs and their data utilization efforts?


Phil Moore: For me, the key finding was that we had a premise, a hypothesis, and this survey helped us validate our thesis. Our thesis is that analytics has always been a deep pockets game — people want it, but it’s out of reach financially. That’s talking about the proverbial $50,000 to $200,000 analytics project… Our goal and our mission is to bring that analytics down to the SMB market. We just had to prove our thesis, and this survey proves that thesis.

It tells us that clients want it — they know about analytics and they want it.

What were some of the key findings of the survey?

Moore: Fifty-nine percent said that if they don’t have analytics, it’s going to take them longer to go to market. Fifty-six percent said it will take them longer to service their clients without analytics capabilities. Fifty-four percent, a little over half, said if they didn’t have analytics, or when they don’t have analytics, they run the risk of making a harmful business decision.


That tells us people want it… We have people trying analytics — 67% are spending $10,000 a year or more, and 75% spent at least 132 hours of labor maintaining their systems — but they’re not getting what they need. A full 86% said they’re underachieving when they’re taking a swing with their analytics solution.

What are the key resources these businesses lack in order to fully utilize data? Is it strictly financial or are there other things as well?

Moore: We weren’t surprised, but what we hadn’t thought about is that the SMB market just doesn’t have the in-house skills. One in five said they just don’t have the people in the company to create the systems.

Might new technologies help SMBs eventually exploit data to its full extent?

Moore: The technologies have emerged and have matured, and one of the biggest things in the technology arena that helps bring the price down, or make it more available, is simply moving to the cloud. An on-premises analytics solution requires hardware, and it’s just an expensive footprint to get off the ground. But with Microsoft and their Azure Cloud and their Office 365, or their Azure Synapse Analytics offering, people can actually get to the technology at a far cheaper price point.

That one technology right there makes it far more affordable for the SMB market.

What about things like low-code/no-code platforms, natural language query, embedded analytics — will those play a role in helping SMBs improve data utilization for growth?

Moore: In the SMB market, they’re aware of things like machine learning, but they’re closer to the core blocking and tackling of looking at [key performance indicators], looking at cash dashboards so they know how much cash they have in the bank, looking at their service dashboard and finding the clients they’re ignoring.

The first and easiest one that’s going to apply to SMBs is low-code/no-code, particularly in grabbing their source data, transforming it and making it available for analytics. Prior to low-code/no-code, it’s really a high-code alternative, and that’s where it takes an army of programmers and all they’re doing is moving data — the data pipeline.

But there will be a set of the SMB market that goes after some of the other technologies like machine learning — we’ve seen some people be really excited about it. One example was looking at [IT help] tickets that are being worked in the service industry and comparing it with customer satisfaction. What they were measuring was ticket staleness, how many tickets their service team were ignoring, and as they were getting stale, their clients would be getting angry for lack of service. With machine learning, they were able to find that if they ignored a printer ticket for two weeks, that is far different than ignoring an email problem for two weeks. Ignoring an email problem for two days leads to a horrible customer satisfaction score. Machine learning goes in and relates that stuff, and that’s very powerful. The small and medium-sized business market will get there, but they’re starting at earlier and more basic steps.

Editor’s note: This Q&A has been edited for brevity and clarity.


Google Cloud support premium tier woos enterprise customers

Google Cloud has introduced a Premium Support option designed to appeal to large enterprises through features such as 15-minute response times for critical issues.

Premium Support customers will be serviced by “context-aware experts who understand your unique application stack, architecture and implementation details,” said Atul Nanda, vice president of cloud support.

These experts will coordinate with a customer’s assigned technical account manager to resolve issues faster and in a more personalized manner, Nanda said in a blog post.

Google wanted to expand its support offerings beyond what basic plans for Google Cloud and G Suite include, according to Nanda. Other Premium Support features include operational health reviews, training, preview access to new products and more help with third-party technologies.

In contrast, Google’s other support options include a free tier that provides help only with billing issues; Development, which costs $100 per user per month and has a four-hour response time; and Production, which costs $250 per user per month and has a one-hour response time.

Premium Support carries a base annual fee of $150,000 plus 4% of the customer’s net spending on Google Cloud Platform and/or G Suite. Google is also working on add-on services for Premium Support, such as expanded technical account manager coverage and mission-critical support, which involves a site reliability engineering consulting engagement. The latter is now in pilot.
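For illustration only (the spending figure is hypothetical, not from Google or the article): a customer with $2 million in annual net GCP and G Suite spending would pay $150,000 + (0.04 × $2,000,000) = $230,000 per year for Premium Support, before any add-on services.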

Cloud changes the support equation

Customers with on-premises software licenses are used to paying stiff annual maintenance fees, which give them updates, bug fixes and technical support. On-premises maintenance fees can generate profit margins for vendors north of 90%, consuming billions of IT budget dollars that could have been spent on better things, said Duncan Jones, an analyst at Forrester.



“But customers of premium support offerings such as Microsoft Unified (fka Premier) Support and SAP MaxAttention express much higher satisfaction levels with value for money,” Jones said via email. “They are usually an alternative to similar services that the vendor’s SI and channel partners offer, so there is competition that drives up standards. Plus, they are optional extras so price/demand sensitivity keeps pricing at reasonable levels.” On the whole, Google’s move to add Premium Support is positive for customers, according to Jones.

But it’s clear why Google did it from a business perspective, said Grant Kirkwood, CTO of Unitas Global, a hybrid cloud services provider in Los Angeles. “Google is recognizing they need to move up the stack in terms of support to make further inroads into the enterprise space,” he said.

Microsoft today probably has the most robust support in terms of a traditional enterprise look-and-feel, while AWS’ approach is geared a bit more toward DevOps-centric shops, Kirkwood added.

“[Google is] taking a bit out of both playbooks,” he said. Premium Support could appeal to enterprises that have already done easier lift-and-shift projects to the cloud and are now rebuilding or creating new cloud-native applications, according to Kirkwood.

But as with anything, Google will have to prove its Premium Support option is worth the extra money.

“Successful [support] plans require great customer success management, highly trained technical account managers and AI-driven case management,” said Ray Wang, founder and CEO of Constellation Research.


BigID: New privacy regulations have ended ‘the data party’

The ‘data party’ era of enterprises indiscriminately collecting, storing and selling users’ personal information is coming to an end, according to BigID.

A New York-based startup, BigID was formed in 2015 with the goal of improving enterprise data management and protection in the age of GDPR and the California Consumer Privacy Act (CCPA). The company, which won the 2018 Innovation Sandbox Contest at RSA Conference, recently raised $50 million in Series C funding. Now BigID is expanding its mission to help enterprises better understand and control their data amid new privacy regulations.

BigID co-founder and chief product officer Nimrod Vax talks with SearchSecurity about how new regulations have effectively ended the data party. He also discusses BigID’s launch, its future and whether data protection is getting easier or harder.

Editor’s note: This interview has been edited for length and clarity.

How was BigID founded?

Nimrod Vax: Dimitri [Sirota, CEO] and I were the company’s two founders. At my last kind-of real job I was head of the identity product line at CA, and at the time CA acquired Dimitri’s company, Layer 7 Technologies. That’s how we met, so we got to work together on challenges of customers around identity management and security. After we left CA, at the time, there was a big surge of breaches of personal information through incidents like the Ashley Madison scandal and LinkedIn and Twitter. And what was really surprising about those breaches was that they were breaches of what you would think is very sensitive information. It wasn’t nuclear plans or anything; it was really just lists of names and addresses and phone numbers, but it was millions and billions of them. The following year, there were four billion personal records stolen. And the question that we asked ourselves was that with all of these security tools that are out there, why are these breaches still happening? And we learned that data protection tools that were available at the time and even today were not purposely built to protect and discover and manage personal information. They were really very generic and were not built for that. And also, these scandals kind of raised visibility and awareness of privacy. The legislation has picked up and we have GDPR coming and later CCPA, so we’ve identified the opportunity to help software organizations address those needs and meet the requirements of these regulations.

What does BigID do?

Vax: BigID’s aim is to help organizations better understand what data they store about their customers and in general, and then allow them to take action on top of that: comply with regulations, better protect the data and better manage it to get more value out of it. In order to do that, BigID is able to connect to all data sources. We have over 60 different connectors to all the things you could even think about that you may have in an IT organization: all of the relational databases, all of the unstructured data sources, semistructured data, big data repositories, anything in AWS, business applications like SAP, Salesforce, Workspace, you name it. We connect to anything, and then search for and classify the data. We first and foremost catalog everything so you have a full catalog of all the data that you have. We classify that data and tell you what type of data it is — where do you have user IDs? Where do you have phone numbers? We help to cluster it, so we can find similar types of data without knowing anything about it in advance; content that looks similar to other data gets clustered together. Our claim to fame is our ability to correlate it. We can find not only Social Security numbers but whose Social Security number each one is, and that allows you to distinguish between customer data, American data, European resident data, children’s or adults’ information, and also to know whose data it is for access rights and who to notify regarding a breach.

The solution is built to run on premises, but it’s modern enterprise software. It’s completely containerized and designed for containers. It automatically scales up and down and doesn’t require any agents on the endpoint; it connects using open APIs, and we don’t copy the data out of its sources — that’s important because we don’t want to create a security problem. We also don’t want to incur a lot of additional storage.

And lastly, and I think this is very important, the discovery layer is all exposed through a well-documented set of APIs so that you can query that information and make it accessible to applications, and we build applications on top of that.

We’re obviously generating more and more user data every single day. Does data protection and data governance become exponentially harder as time goes on? And if so, how do you keep up with that explosion of user data?

Vax: One of the problems that led to BigID was the fact that organizations now have the knowledge and technology that allow them to store unlimited amounts of data. If you look at big data repositories, it’s all about storing truckloads of data; organizations are collecting as much as they can and they’re never deleting the data. That is a big challenge for them, not only to protect the data but even to gain value from the data. Information flows into the organization through so many different channels — from applications, from websites and from partners. Different business units are collecting data and they’re not consolidating it, so all the goodness of the ability to process all that data comes with a burden. How do I make more use of that data? How do I consolidate the data? How do I gain visibility into the data I own and have access to? That complexity requires a different approach to data discovery and data management, and that approach first requires you to be big data native; you need to be able to run in those big data repositories natively and not have to stream the data outside like the old legacy tools; you need to be able to scan data at the source, at the ingestion point, as data flows into these warehouses. What we recently introduced [with Data Pipeline Discovery] is the ability to scan data streams in services like Kafka or [AWS] Kinesis so as the data flows into those data lakes, we’re able to classify that data and understand it.

Regarding the CCPA, how much impact do you think it will have on how enterprise data is governed?


Vax: We’re seeing that effect already, and it goes back to the data party that’s been happening in the past five years. There’s been a party of data where organizations have collected as much data as they wanted without any liabilities or guardrails around them. Now, with the CCPA and GDPR, that additional layer of governance is being put in place. You can still collect as much information as you want, but you need to protect it. You have obligations to the people from whom you are collecting the data, and that brings more governance to the data process. Now organizations need to be much more careful about that. The organization needs to have more visibility into the data, not because it’s nice to have but because the regulations require it; you can’t protect, you can’t govern and you can’t control what you don’t know, so that’s the big shift in approach that the CCPA brings to the table. Organizations are already getting prepared for that. We’re already seeing that organizations are taking it very seriously, and they don’t want to be the first ones to be dinged by the regulation. It’s not even the financial impact; it’s more the reputational impact they are concerned about. Nobody wants to be on the board of shame of the CCPA. They want to send a message to their customers that they care about privacy — not that they’re careless about it. I think that’s the big impact that we’re seeing.

What do the next 12 months look like for the company?

Vax: We’re growing rapidly, both in product and in staff — I think we’re about 150 people now. Last year, I think we were less than 30. We’re continuing to grow, and that growth is in two areas: the product side and extending to additional audiences. We are continuing to invest in our core discovery capabilities. We’re also building more apps. We’re going to solve more difficult problems in privacy, security and governance. We’re also extending to new audiences. Today, we are primarily focused on building solutions and offerings for developers so that they can leverage our APIs and build on top of them. Next, we are focusing on putting built-in privacy into applications seamlessly, with zero friction.


Government IT pros: Hiring data scientists isn’t an exact science

WASHINGTON, D.C. — Government agencies face the same problems as enterprises when it comes to turning their vast data stores into useful information. In the case of government, that information is used to provide services such as healthcare, scientific research, legal protections and even to fight wars.

Public sector IT pros at the Veritas Public Sector Vision Day this week talked about their challenges in making data useful and keeping it secure. A major part of their work currently involves finding the right people to fill data analytics roles, including hiring data scientists. They described data science as a combination of roles requiring technical as well as subject matter expertise, which often means a diverse team is needed to succeed.

Tiffany Julian, data scientist at the National Science Foundation, said she recently sat in on a focus group involved with the Office of Personnel Management’s initiative to define data scientist.

“One of the big messages from that was, there’s no such thing as a unicorn. You don’t hire a data scientist. You create a team of people who do data science together,” Julian said.

Julian said data science includes more than programmers and technical experts. Subject experts who know their company or agency mission also play a role.

“You want your software engineers, you want your programmers, you want your database engineers,” she said. “But you also want your common sense social scientists involved. You can’t just prioritize one of those fields. Let’s say you’re really good at Python, you’re really good at R. You’re still going to have to come up with data and processes, test it out, draw a conclusion. No one person you hire is going to have all of those skills that you really need to make data-driven decisions.”

Wanted: People who know they don’t know it all

Because she is a data scientist, Julian said others in her agency ask what skills they should seek when hiring data scientists.


“I’m looking for that wisdom that comes from knowing that I don’t know everything,” she said. “You’re not a data scientist, you’re a programmer, you’re an analyst, you’re one of these roles.”

Tom Beach, chief data strategist and portfolio manager for the U.S. Patent and Trademark Office (USPTO), said he takes a similar approach when looking for data scientists.

“These are folks that know enough to know that they don’t know everything, but are very creative,” he said.

Beach added that when hiring data scientists, he looks for people “who have the desire to solve a really challenging problem. There is a big disconnect between an abstract problem and a piece of code. In our organization, a regulatory agency dealing with patents and trademarks, there’s a lot of legalese and legal frameworks. Those don’t code well. Court decisions are not readily codable into a framework.”

‘Cloud not enough’

Like enterprises, government agencies also need to get the right tools to help facilitate data science. Peter Ranks, deputy CIO for information enterprise at the Department of Defense, said data is key to his department, even if DoD IT people often talk more about technologies such as cloud, AI, cybersecurity and the three Cs (command, control and communications) when they discuss digital modernization.

“What’s not on the list is anything about data,” he said. “And that’s unfortunate because data is really woven into every one of those. None of those activities are going to succeed without a focused effort to get more utility out of the data that we’ve got.”

Ranks said future battles will depend on the ability of forces on land, air, sea, space and cyber to interoperate in a coordinated fashion.

“That’s a data problem,” he said. “We need to be able to communicate and share intelligence with our partners. We need to be able to share situational awareness data with coalitions that may be created on demand and respond to a particular crisis.”

Ranks cautioned against putting too much emphasis on leaning on the cloud for data science. He described cloud as the foundation on the bottom of a pyramid, with software in the middle and data on top.

“Cloud is not enough,” he said. “Cloud is not a strategy. Cloud is not a destination. Cloud is not an objective. Cloud is a tool, and it’s one tool among many to achieve the outcomes that your agency is trying to get after. We find that if all we do is adopt cloud, if we don’t modernize software, all we get is the same old software in somebody else’s data center. If we modernize software processes but don’t tackle the data … we find that bad data becomes a huge boat anchor or that all those modernized software applications have to drive around. It’s hard to do good analytics with bad data. It’s hard to do good AI.”

Beach agreed. He said cloud is “100%” part of USPTO’s data strategy, but so is recognition of people’s roles and responsibilities.

“We’re looking at not just governance behavior as a compliance exercise, but talking about people, process and technology,” he said. “We’re not just going to tech our way out of a situation. Cloud is just a foundational step. It’s also important to understand the recognition of roles and responsibilities around data stewards, data custodians.”

This includes helping ensure that people can find the data they need, as well as denying access to people who do not need that data.

Nick Marinos, director of cybersecurity and data protection at the Government Accountability Office, said understanding your data is a key step in ensuring data protection and security.

“Thinking upfront about what data we actually have, and what we use the data for, are really the most important questions to ask from a security or privacy perspective,” he said. “Ultimately, having an awareness of the full inventory within the federal agencies is really the only way that you can even start to approach protecting the enterprise as a whole.”

Marinos said data protection audits at government agencies often start with looking at the agency’s mission and its flow of data.

“Only from there can we as auditors — and the agency itself — have a strong awareness of how many touch points there are on these data pieces,” he said. “From a best practice perspective, that’s one of the first steps.”


Windows Server 2008 end of life: Is Azure the right path?

As the Windows Server 2008 end of life inches closer, enterprises should consider which retirement plan to pursue before security updates run out.

As of Jan. 14, Microsoft will end security updates for Windows Server 2008 and 2008 R2 machines that run in the data center. Organizations that continue to use these server operating systems will be vulnerable because hackers will inevitably continue to look for weaknesses in them, but Microsoft will not — except in rare circumstances — provide fixes for those vulnerabilities. Additionally, Microsoft will not update online technical content related to these operating systems or give any free technical support.

Although there are benefits to upgrading to a newer version of Windows Server, there may be some instances in which this is not an option. For example, your organization might need an application that is not compatible with or supported on newer Windows Server versions. Similarly, there are situations in which it is possible to migrate the server to a new operating system, but not quickly enough to complete the process before the impending end-of-support deadline.

Microsoft has a few options for those organizations that need to continue running Windows Server 2008 or 2008 R2. Although the company will no longer give updates for the aging operating system through the usual channels, customers can purchase extended security updates.

You can delay Windows Server 2008 end of life — if you can afford it

Those who wish to continue using Windows Server 2008 or 2008 R2 on premises will need Software Assurance or a subscription license to purchase extended updates. The extended updates are relatively expensive, at roughly 75% of the cost of a current Windows Server license each year. This is likely Microsoft’s way of trying to get customers to migrate to a newer Windows Server version, because the extended security updates cost almost as much as a Windows Server license.
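As a rough, purely illustrative calculation (the license price is hypothetical): if a current Windows Server license for a given machine costs $1,000, the extended security updates would run roughly 0.75 × $1,000 = $750 for that machine each year.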

The other option for those organizations that need to continue running Windows Server 2008 or 2008 R2 is to migrate those servers to the Azure cloud. Organizations that decide to switch those workloads to Azure will receive free extended security updates for three years.


Know what a move to Azure entails

Before migrating a Windows Server workload to the cloud, it is important to consider the pros and cons of making the switch to Azure. The most obvious benefit is financial: it gives you a few years to run this OS without having to pay for extended security updates.

Another benefit to the migration to Azure is a reduction in hardware-related costs. Windows Server 2008 was the first Windows Server version to include Hyper-V, but many organizations opted to install Windows Server 2008 onto physical hardware rather than virtualizing it. If your organization runs Windows Server 2008/2008 R2 on a physical server, then this is a perfect opportunity to retire the aging server hardware.

If your Windows Server 2008/2008 R2 workloads are virtualized, then moving those VMs to Azure can free up some capacity on the virtualization hosts for other workloads.

Learn about the financial and technical impact

One disadvantage to operating your servers in Azure is the cost. You will pay a monthly fee to run Windows Server 2008 workloads in the cloud. However, it is worth noting that Microsoft offers a program called the Azure Hybrid Benefit, which gives organizations with Windows Server licenses 40% off the cost of running eligible VMs in the cloud. To get an idea of how much your workloads might cost, you can use Microsoft’s online pricing calculator.
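As a rough sketch of how that discount is applied in practice with the Az PowerShell module (the resource group and VM names below are placeholders, not values from the article), you mark an existing Azure VM as covered by a Windows Server license you already own:

# Sign in, then load the VM that was migrated from the on-premises host
Connect-AzAccount
$vm = Get-AzVM -ResourceGroupName "rg-legacy-apps" -Name "ws2008-app01"

# Flag the VM as using an existing Windows Server license (Azure Hybrid Benefit)
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "rg-legacy-apps" -VM $vm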

Another disadvantage with moving a server workload to Azure is the increased complexity of your network infrastructure. This added complication isn’t limited just to the migrating servers. Typically, you will have to create a hybrid Active Directory environment and also create a VPN that allows secure communications between your on-premises network and the Azure cloud.

Factor in these Azure migration considerations

For organizations that decide to migrate their Windows Server 2008 workloads to Azure, there are a number of potential migration issues to consider.

Servers often have multiple dependencies, and you will need to address these as part of the migration planning. For instance, an application may need to connect to a database that is hosted on another server. In this situation, you will have to decide whether to migrate the database to Azure or whether it is acceptable for the application to perform database queries across a WAN connection.

Similarly, you will have to consider the migration’s impact on your internet bandwidth. Some of your bandwidth will be consumed by management traffic, directory synchronizations and various cloud processes. It’s important to make sure your organization has enough bandwidth available to handle this increase in traffic.

Finally, there are differences between managing cloud workloads and ones in your data center. The Azure cloud has its own management interface that you will need to learn. Additionally, you may find your current management tools either cannot manage cloud-based resources or may require a significant amount of reconfiguring. For example, a patch management product might not automatically detect your VM in Azure; you may need to either create a separate patch management infrastructure for the cloud or provide the vendor with a path to your cloud-based resources.


Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

One of the many challenges with multi-tenancy is that it is not easy to define, and few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
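As a minimal illustration of that namespace-per-tenant pattern (the tenant, role and service account names below are hypothetical, and the commands are standard kubectl run from any shell), an administrator might carve out one namespace per tenant and bind a role that is valid only inside it:

# Create an isolated namespace and a service account for a single tenant
kubectl create namespace tenant-a
kubectl create serviceaccount deployer -n tenant-a

# Define a role limited to common workload objects inside that namespace
kubectl create role app-admin --verb=get,list,watch,create,update,delete --resource=pods,deployments,services -n tenant-a

# Bind the role to the tenant's service account; it grants nothing outside tenant-a
kubectl create rolebinding app-admin-binding --role=app-admin --serviceaccount=tenant-a:deployer -n tenant-a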

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform is composed of several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on cloud service providers such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, which became available in Kubernetes 1.13, but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.

In the meantime, Verizon’s Yahoo Media has donated a project called Athenz to open source, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise also created a similar open source tool called RBACSync, along with Daytona, a tool that fetches secrets from HashiCorp Vault (which Cruise uses instead of etcd to store secrets) and injects them into running applications, and k-rail for workload policy enforcement.

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more highly granular access control mechanisms in a multi-tenant environment, and simplifies their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hopes it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. Customers didn’t have a strategic model to work from.

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there’s edge computing, which stores data where the user is, in between the company and the public cloud; for example, on their customers’ mobile devices or on sensors in smart buildings like hospitals and factories.


That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer’s IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise, automating system updates and security enhancements, brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said HPE’s Vogel. HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.



DevOps security shifts left, but miles to go to pass hackers

DevOps security processes have matured within enterprises over the last year, but IT shops still have far to go to stem the tide of data breaches.

DevOps teams have built good security habits almost by default as they have increased the frequency of application releases and adopted infrastructure and security automation to improve software development. More frequent, smaller, automated app deployments are less risky and less prone to manual error than large and infrequent ones.

Microservices management and release automation demand tools such as infrastructure as code and configuration management software to manage infrastructure, which similarly cut down on human error. Wrapped up into a streamlined GitOps process, Agile and DevOps techniques automate the path to production while locking down access to it — a win for both security and IT efficiency.

However, the first six months of 2019 saw such a flood of high-profile data breaches that at least one security research firm called it the worst year on record. And while cybersecurity experts aren’t certain how trustworthy that measurement is — there could just be more awareness of breaches than there used to be, or more digital services to attack than in past years — they feel strongly that DevOps security teams still aren’t staying ahead of attackers, who have also learned to automate and optimize what they do.


“The attackers have innovated, and that’s one of the problems with our industry — we’re at least five years behind the attackers,” said Adrian Sanabria, advocate at Thinkst Applied Research, a cybersecurity research and software firm based in South Africa. “We’re in a mode where we’re convinced, with all this VC money and money spent on marketing, that we have to wait for a product to be available to solve these problems … and they’re never going to be ready in time.”

DevOps security tools aren’t enough

A cybersecurity tool is only as good as how it’s used, Sanabria said, citing the example of the 2013 Target breach, where security software detected potentially malicious activity, but IT staff didn’t act on its warnings. In part, this was attributed to alert fatigue, as IT teams increasingly deal with a fire hose of alerts from various monitoring systems. But it also has to do with IT training, Sanabria said.

“In the breach research I’ve done, generally everyone owned [the tools] they needed to own,” he said. “They either didn’t know how to use it, hadn’t set it up correctly, or they had some kind of process issue where the [tools] did try to stop the attacks or warn them of it, [but] they either didn’t see the alert or didn’t act on the alert.”


DevOps security, or DevSecOps, teams have locked down many of the technical weak points within infrastructure and app deployment processes, but all too often, the initial attack takes a very human form, such as a spoofed email that seems to come from a company executive, directing the recipient to transfer funds to what turns out to be an attacker’s account.

“Often, breaches don’t even require hacking,” Sanabria said. “It requires understanding of financial processes, who’s who in the company and the timing of certain transactions.”

Preventing such attacks requires that employees be equally familiar with that information, Sanabria said. That lack of awareness is driving a surge in ransomware attacks, which rely almost entirely on social engineering to hold vital company data hostage.

Collaboration and strategy vital for DevOps security

Thus, in a world of sophisticated technology, the biggest problems remain human, according to experts — and their solutions are also rooted in organizational dynamics and human collaboration, starting with a more strategic, holistic organizational approach to IT security.


“Technology people don’t think of leadership skills and collaboration as primary job functions,” said Jeremy Pullen, CEO of Polodis, a digital transformation consulting firm in Atlanta. “They think the job is day-to-day technical threat remediation, but you can’t scale your organization when you have people trying to do it all themselves.”

An overreliance on individual security experts within enterprises leads to a ‘lamppost effect,’ where those individuals overcompensate for risks they’re familiar with, but undercompensate in areas they don’t understand as well, Pullen said. That kind of team structure also results in the time-honored DevOps bugaboo of siloed responsibilities, which increases security fragility in the same way it dampens application performance and infrastructure resilience.

“Developers and operations may be blind to application security issues, while security tends to focus on physical and infrastructure security, which is most clearly defined in their threat models,” Pullen said. “Then it becomes a bit of a game of Whac-a-Mole … where you’re trying to fix one thing and then another thing pops up, and it gets really noisy.”

Instead, DevSecOps teams must begin to think of themselves and their individual job functions as nodes in a network rather than layers of a stack, Pullen said, and work to understand how the entire organization fits together.

“Everyone’s unclear about what enterprise architecture is,” he said. “They stick Jenkins in the middle of a process but might not understand that they need to separate that environment into different domains and understand governance boundaries.”

Effective DevOps security requires more team practice

Strategically hardening applications and IT management processes to prevent attacks is important, but organizations must also strategically plan — and practice — their response to ongoing security incidents that can and will still happen.

“Cybersecurity so far has been focused on solitary study and being the best technical practitioner you can be, and building stand-alone applications and infrastructure to the best technical standard, which reminds me of golf,” said Nick Drage, principal consultant at Path Dependence Ltd., a cybersecurity consulting firm based in the U.K., in a presentation at DevSecCon in Seattle last month. “But in reality, cybersecurity is a fight with an opponent over territory — much more like American football.”

As long as security is practiced by isolated individuals, it will be as effective as taking the football field armed with golf clubs, Drage said. Instead, the approach should be more team-oriented, cooperative, and, especially, emphasize team practice to prepare for ‘game time.’


American football defenses are particularly instructive for DevOps security strategy ideas about defense in depth, Drage said in his presentation. Among other things, they demonstrate that an initial incursion into a team’s territory — yards gained — does not amount to a breach — points scored. IT teams should also apply that thinking as they try to anticipate and respond to threats — how to protect the ‘end zone,’ so to speak, and not just their half of the field.

Thinkst’s Sanabria uses a different analogy — the DevOps security team as firefighters.

“We’re not going to get good at this if we don’t practice it,” he said. “We buy all the tools, but imagine firefighters if they’d never donned the suits, never driven the truck, never used the hose and they’re not expecting the amount of force and it knocks them down. Going out to their first fire would look like a comedy.”

And yet that’s exactly what happens with many enterprise IT security teams when they must respond to incidents, Sanabria said, in part because companies don’t prioritize experiential learning over informational training.

The good news is that IT analysts expect the next wave of DevOps security to look very much like the chaos engineering used in many organizations to improve system resiliency, but with a human twist. Organizations such as OpenSOC have begun to emerge that set up training workshops, including simulated ransomware attacks, for companies to practice security incident response. Companies can also do this internally by treating penetration tests as real attacks, a practice otherwise known as red teaming. Free and open source tools such as Infection Monkey from Guardicore Labs also simulate attack scenarios.


Tech companies such as Google already practice their own form of human-based chaos testing, where employees are selected at random for a ‘staycation,’ directed to take a minimum of one hour to answer work emails, or to intentionally give wrong answers to questions, to test the resiliency of the rest of the organization.

“Despite the implications of the word ‘chaos,’ some companies are already presenting chaos engineering to their risk management leaders and auditors,” said Charles Betz, analyst at Forrester Research. “This is the future of governance — controlling risk on the human side of our systems.”


How to work with the WSUS PowerShell module

In many enterprises, you use Windows Server Update Services to centralize and distribute Windows patches to end-user devices and servers.

WSUS is a free service that installs on Windows Server and syncs Windows updates locally. Clients connect to and download patches from the server. Historically, you manage WSUS with a GUI, but with PowerShell and the PoshWSUS community module, you can automate your work with WSUS for more efficiency. This article covers how to use some of the common cmdlets in the PoshWSUS module.

Connecting to a WSUS server

The first task to do with PoshWSUS is to connect to an existing WSUS server so you can run cmdlets against it. This is done with the Connect-PSWSUSServer cmdlet. The cmdlet provides the option to make a secure connection, which is normally on port 8531 for SSL.

Connect-PSWSUSServer -WsusServer wsus -Port 8531 -SecureConnection
Name Version PortNumber ServerProtocolVersion
---- ------- ---------- ---------------------
wsus 10.0.14393.2969 8530 1.20

View the WSUS clients

There are various cmdlets used to view WSUS client information. The most apparent is Get-PSWSUSClient, which shows client information such as hostname, group membership, hardware model and operating system type. The example below gets information on a specific machine named Test-1.

Get-PSWSUSClient Test-1 | Select-Object *
ComputerGroup : {Windows 10, All Computers}
UpdateServer : Microsoft.UpdateServices.Internal.BaseApi.UpdateServer
Id : 94a2fc62-ea2e-45b4-97d5-10f5a04d3010
FullDomainName : Test-1
IPAddress : 172.16.48.153
Make : HP
Model : HP EliteDesk 800 G2 SFF
BiosInfo : Microsoft.UpdateServices.Administration.BiosInfo
OSInfo : Microsoft.UpdateServices.Administration.OSInfo
OSArchitecture : AMD64
ClientVersion : 10.0.18362.267
OSFamily : Windows
OSDescription : Windows 10 Enterprise
ComputerRole : Workstation
LastSyncTime : 9/9/2019 12:06:59 PM
LastSyncResult : Succeeded
LastReportedStatusTime : 9/9/2019 12:18:50 PM
LastReportedInventoryTime : 1/1/0001 12:00:00 AM
RequestedTargetGroupName : Windows 10
RequestedTargetGroupNames : {Windows 10}
ComputerTargetGroupIds : {59277231-1773-401f-bf44-2fe09ac02b30, a0a08746-4dbe-4a37-9adf-9e7652c0b421}
ParentServerId : 00000000-0000-0000-0000-000000000000
SyncsFromDownstreamServer : False

WSUS usually organizes machines into groups, such as all Windows 10 machines, to apply update policies. The command below measures the number of machines in a particular group called Windows 10 with the cmdlet Get-PSWSUSClientsinGroup:

Get-PSWSUSClientsInGroup -Name 'Windows 10' | Measure-Object | Select-Object -Property Count
Count
-----
86

How to manage Windows updates

With the WSUS PowerShell module, you can view, approve and decline updates on the WSUS server, a very valuable and powerful feature. The command below finds all the Windows 10 feature updates with the title “Feature update to Windows 10 (business editions).” The output shows various updates on my server for version 1903 in different languages:

Get-PSWSUSUpdate -Update "Feature update to Windows 10 (business editions)"  | Select Title
Title
-----
Feature update to Windows 10 (business editions), version 1903, en-gb x86
Feature update to Windows 10 (business editions), version 1903, en-us arm64
Feature update to Windows 10 (business editions), version 1903, en-gb arm64
Feature update to Windows 10 (business editions), version 1903, en-us x86
Feature update to Windows 10 (business editions), version 1903, en-gb x64
Feature update to Windows 10 (business editions), version 1903, en-us x64
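
If you only deploy one language and architecture, you can narrow that list with standard filtering on the Title property shown above. This is a small sketch, with the en-us x64 pattern chosen purely for illustration.

# Keep only the en-us x64 build of the feature update.
Get-PSWSUSUpdate -Update "Feature update to Windows 10 (business editions)" |
    Where-Object { $_.Title -match 'en-us x64' } |
    Select-Object -Property Title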

Another useful feature of this cmdlet is that it can show updates that arrived after a particular date. The following command returns the first five updates that arrived in the last day; a sketch after the output sorts them explicitly by creation date:

Get-PSWSUSUpdate -FromArrivalDate (Get-Date).AddDays(-1) | Select-Object -First 5
Title KnowledgebaseArticles UpdateType CreationDate UpdateID
----- --------------------- ---------- ------------ --------
Security Update for Microso... {4475607} Software 9/10/2019 10:00:00 AM 4fa99b46-765c-4224-a037-7ab...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM 1e489891-3372-43d8-b262-8c8...
Security Update for Microso... {4475599} Software 9/10/2019 10:00:00 AM 76187d58-e8a6-441f-9275-702...
Security Update for Microso... {4461631} Software 9/10/2019 10:00:00 AM 86bdbd3b-7461-4214-a2ba-244...
Security Update for Microso... {4475574} Software 9/10/2019 10:00:00 AM a56d629d-8f09-498f-91e9-572...
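
The order of those results depends on what the server returns, so to make "newest first" explicit, sort on the CreationDate property before taking the first five. A quick refinement sketch:

# Sort the last day's arrivals newest first, then take the top five.
Get-PSWSUSUpdate -FromArrivalDate (Get-Date).AddDays(-1) |
    Sort-Object -Property CreationDate -Descending |
    Select-Object -First 5 -Property Title, KnowledgebaseArticles, CreationDate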

The approval and rejection of updates is an important part of managing Windows updates in the enterprise, and the WSUS PowerShell module makes it easy. A few years ago, Microsoft began releasing preview updates for testing purposes. I typically want to decline these updates to avoid installing them on production machines. The following command finds every update with the string "Preview of" in the title that hasn't already been declined and passes it to the Deny-PSWSUSUpdate cmdlet.

Get-PSWSUSUpdate -Update "Preview of" | Where-Object {$_.IsDeclined -eq 'False' } | Deny-PSWSUSUpdate
Patch IsDeclined
----- ----------
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1 on Windows Server 2008 R2 for Itanium-based Systems (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 (KB4512193) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 7 and Server 2008 R2 for x64 (KB4512193) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0 on Windows Server 2008 SP2 for Itanium-based Systems (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows Server 2012 for x64 (KB4512194) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 (KB4512196) True
2019-08 Preview of Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2, 4.8 for Windows 8.1 and Server 2012 R2 for x64 (KB4512195) True
2019-07 Preview of Quality Rollup for .NET Framework 2.0, 3.0, 4.5.2, 4.6 on Windows Server 2008 SP2 for x64 (KB4512196) True
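
As a quick check after the command has run, you can count any preview updates that are still not declined. The sketch below reuses the same cmdlets; the count should come back as zero if every preview update was declined.

# Verify nothing with "Preview of" in the title remains undeclined.
Get-PSWSUSUpdate -Update "Preview of" |
    Where-Object { -not $_.IsDeclined } |
    Measure-Object |
    Select-Object -Property Count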

Syncing WSUS with Microsoft’s servers

In the WSUS GUI, you can set up a daily synchronization between the WSUS server and the Microsoft update servers to download new updates. I like to synchronize more than once a day, especially on Patch Tuesday, when several updates may land in one day. To do that, you can create a scheduled task that runs a WSUS sync hourly for a few hours per day; a registration sketch follows the output below. The script itself can be as simple as this command:

Start-PSWSUSSync
Synchronization has been started on wsus.
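
To run that sync on a schedule, you can register a task with the built-in ScheduledTasks module. The sketch below is one way to do it, not the article's own script: the task name, start time and eight-hour repetition window are illustrative, and it assumes PoshWSUS is available to the account running the task.

# Run an hourly WSUS sync for eight hours starting at 08:00; values are illustrative.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Import-Module PoshWSUS; Connect-PSWSUSServer -WsusServer wsus -Port 8531 -SecureConnection; Start-PSWSUSSync"'
$trigger = New-ScheduledTaskTrigger -Once -At '08:00' `
    -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Hours 8)
Register-ScheduledTask -TaskName 'Hourly WSUS sync' -Action $action -Trigger $trigger -User 'SYSTEM'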

Performing cleanups

A WSUS server can be fickle. I have had to rebuild WSUS servers several times, and it is a lengthy process because you have to download all the updates to the new server again. Running regular cleanups helps you avoid that. The Start-PSWSUSCleanup cmdlet performs many of these housekeeping actions, such as declining superseded updates, deleting obsolete updates and removing obsolete computers:

Start-PSWSUSCleanup -DeclineSupersededUpdates -DeclineExpiredUpdates -CleanupObsoleteUpdates -CompressUpdates -CleanupObsoleteComputers -CleanupUnneededContentFiles
Beginning cleanup, this may take some time...
SupersededUpdatesDeclined : 223
ExpiredUpdatesDeclined : 0
ObsoleteUpdatesDeleted : 0
UpdatesCompressed : 4
ObsoleteComputersDeleted : 6
DiskSpaceFreed : 57848478722

Go to Original Article

SAP HANA database pivotal to SAP’s past — and future

Twenty years ago, enterprises may have turned to SAP for back-office business software. But these days, SAP wants to be much more than that.

A big part of SAP’s strategy has to do with SAP HANA, an in-memory database the company initially released in 2010. It is now the gateway to what SAP calls the intelligent enterprise, where data is used to improve business processes and develop new business models.

The first part of this two-part series looks at how SAP, now 47 years old, has evolved from a company focused primarily on back-office business software into one that aims to turn organizations into intelligent enterprises.

Broadening the scope

SAP’s story in the last 20 years has been one of continually broadening scope, according to Lloyd Adams, managing director of the East Region at SAP America Inc. He joined the company in 1998.

In the late 1990s and early 2000s, “we were known more as an ERP company — perhaps back office only,” Adams said. “But through the years, both through organic development and a combination of development and acquisition, we’ve positioned ourselves to bring the back office to the front office to help provide the intelligent enterprise.”

Anchored by SAP R/3, its pioneering client-server ERP platform, SAP entered a period of dramatic growth in the late 1990s. It rode the wave of Y2K fears, as businesses scrambled to consolidate IT on back-office ERP systems.

Joshua Greenbaum, principal, Enterprise Applications Consulting

“The upgrade fever that Y2K created was really enormous and a lot of folks were pushing to use Y2K as a way to rationalize IT spending,” said Joshua Greenbaum, principal at Enterprise Applications Consulting. “Also the Euro changeover was coming, and there was a lot of interest in looking at SAP because of how it could help manage European currency changes. So those two phenomena were really operative in the late 1990s, and SAP was right at the forefront of it.”

At the same time that SAP's ERP business was growing, however, the company faced threats from the rise of internet-based business systems and on-premises best-of-breed applications, according to Jon Reed, co-founder of the ERP news and analysis firm Diginomica.com. Those included Siebel Systems, maker of a popular CRM product, which Oracle acquired in 2005, and Ariba, a procurement software vendor that SAP itself eventually acquired in 2012.

“SAP was able to weather those storms while expanding their ERP footprint by building out a serviceable CRM module, as well as an HR module with a globalized payroll function that has stood the test of time,” Reed said. “Their core manufacturing base remained loyal and … preferred SAP’s ‘one throat to choke’ approach and extensive consulting partners.”

Not all of SAP’s efforts succeeded. Its SAP NetWeaver integration platform fell short, and the company failed to see Salesforce — or anything SaaS — coming, Reed said.

One of the main keys to SAP’s success was to encourage its customers to undergo IT and business process reengineering in the 1990s, even if it was extremely complex, according to analyst Dana Gardner, president of Interarbor Solutions LLC in Gilford, N.H.

“Such IT-instigated business change was — and is — not easy, and the stumbles by many companies to catch up to the client-server world and implement ERP were legendary,” he said. “But imagine if those companies had not made the shift to being digital businesses in the 1990s? When the web and internet hit, manual processes and nonintegrated business functions had to adapt to a connected world, so IT went from being for the business to being the whole business.”

The idea that applications and the supporting IT infrastructure work collectively using distributed yet common data and pervasive networking to provide the best information and processes is a given these days, but SAP made this possible first, Gardner said.

Milestones in SAP's 20-year journey from R/3 to the intelligent enterprise.

The SAP HANA big bang

But by the end of the 2000s, the radical new in-memory database SAP HANA was about to change SAP’s direction again.

The release of the SAP HANA database in 2010 was the critical development that allowed SAP to conceive of and begin to sell the concept of the intelligent enterprise, according to Adams. Without HANA, there would be no intelligent enterprise.

Lloyd Adams, managing director, East Region at SAP America Inc.

“It truly revolutionized the company, the industry and our ability to transcend conversations from a back-office perspective, but then be able to sit down with our customers and try and understand what were the main opportunities that they were looking to exploit or problems they were looking to solve,” he said.

The development of SAP HANA was driven in large part by the rivalry between SAP and Oracle, according to Greenbaum. The SAP ERP applications ran mostly on Oracle databases, and in the 2000s Oracle began to aggressively encroach on SAP’s territory in the enterprise software space with moves like the bitter acquisition of ERP vendor PeopleSoft.

“For SAP this was a real wake up call, because of the dependency that they had on the Oracle database,” Greenbaum said. “That realization that they needed to get out from under Oracle, along with some research that had already been going on with in-memory databases inside SAP, began the hunt for an alternative, and that’s where the HANA project started to bear fruit.”

It has been a long, slow process for SAP to move its customers off of Oracle, which is still something of a problem today, Greenbaum said. But he believes HANA is now firmly established as the database of choice for customers.

Missteps with the SAP HANA database?

However, the emphasis on the SAP HANA database might have also been a distraction that took the company away from innovating on the applications that form SAP’s core user base, according to analyst Vinnie Mirchandani, founder of Deal Architect.

Vinnie Mirchandani, analyst and founder, Deal Architect

“Every few years, SAP gets enamored with platforms and tools,” Mirchandani said. “NetWeaver and HANA, in particular, distracted the company from an application focus, without generating much revenue or market share in those segments.”

SAP was fundamentally correct that in-memory technology and real-time ERP were the ways of the future, but its push into databases with HANA is still a questionable strategy, according to Reed.

“Whether SAP should have entered the database business themselves is still open to second-guessing,” he said. “You can argue this move has distracted SAP from focusing on their homegrown and acquired cloud applications. For example, would SAP be much further ahead on SuccessFactors functionality if they hadn’t spent so much time putting SuccessFactors onto HANA?”

Buying into the cloud

SAP was slow to react to the rise of enterprise cloud computing and SaaS applications like Salesforce, but it course-corrected by going on a cloud application buying spree, acquiring SuccessFactors in 2011, Ariba in 2012, Hybris in 2013, and Fieldglass and Concur in 2014.

Combining these cloud applications with SAP HANA “completely changed the game” for the company, Adams said.

“We eventually began to put those cloud line of business solutions on the HANA platform,” he said. “That’s given us the ability to tell a full intelligent enterprise story in ways that we weren’t fully poised to do [before HANA].”

SAP’s strategy of buying its way into the cloud has been largely successful, although efforts to move core legacy applications to the cloud have been mixed, Greenbaum said.

“SAP can claim to be one of the absolute leaders in the cloud enterprise software space,” he said. “It’s a legacy that is tempered by the fact that they’re still pulling the core legacy R/3 and ECC customers into the cloud, which has not worked out as well as SAP would like, but in terms of overall revenue and influence in the area, they’ve made their mark.”

Although SAP has proved adaptable to changing technologies and business trends, questions remain about its future. Part two of this series will look at the release of SAP S/4HANA (the rewrite of SAP's signature Business Suite on HANA), the emergence of the SAP intelligent enterprise and SAP's focus on customer experience.

Go to Original Article