Tag Archives: Cloud

Xiaomi and Microsoft sign strategic MoU in cloud, devices and AI areas

Xiaomi and Microsoft will deepen their cooperation in cloud computing, AI and hardware to make Xiaomi’s products and services a better fit for global markets

Beijing, Feb. 23, 2018 – Today, Xiaomi Corporation and Microsoft Corporation signed a Strategic Framework Memorandum of Understanding (MoU) to further deepen the partnership between the two companies. Microsoft’s globally leading technologies in cloud computing and AI will help strengthen Xiaomi’s leadership in mobile, smart devices and services, and help accelerate its international expansion.

Dr. Harry Shum, executive vice president of Microsoft’s Artificial Intelligence and Research Group said, “Xiaomi is one of the most innovative companies in China, and it is becoming increasingly popular in various markets around the world. Microsoft’s unique strengths and experience in AI, as well as our products like Azure, will enable Xiaomi to develop more cutting-edge technology for everyone around the world.”

Wang Xiang, Global Senior Vice President and Head of International Business, Xiaomi, said: “Microsoft has been a great partner and we are delighted to see both companies deepening this relationship with this strategic MoU. Xiaomi’s mission is to deliver innovation to everyone around the world. By collaborating with Microsoft on multiple technology areas, Xiaomi will accelerate our pace to bring more exciting products and services to our users. At the same time, this partnership would allow Microsoft to reach more users around the world who are using Xiaomi products.”

Based on the strategic MoU, Xiaomi and Microsoft’s cooperation will focus on the following aspects:

  • Cloud support: Xiaomi is rapidly expanding its user base in global markets. Xiaomi and Microsoft will explore using the Microsoft Azure cloud platform to support Xiaomi’s user data storage, bandwidth, computing and other cloud services in international markets.
  • Laptop-type devices: Xiaomi will draw on Microsoft’s support in joint marketing, channel development, and future product development to bring its laptop and laptop-type devices into international markets.
  • Microsoft Cortana and Mi AI Speaker: Both companies are discussing opportunities to integrate Cortana with Mi AI Speaker. Senior executives from both parties are involved to drive deeper technology integration and collaboration for AI-powered speakers, a market segment projected to grow rapidly in the next few years.
  • AI services collaboration: Xiaomi and Microsoft intend to explore multiple cooperative projects based on a broad range of Microsoft AI technologies, such as Computer Vision, Speech, Natural Language Processing, Text Input, Conversational AI, Knowledge Graph and Search, as well as related Microsoft AI products and services, such as Bing, Edge, Cortana, XiaoIce, SwiftKey, Translator, Pix, Cognitive Services and Skype. Combined with Xiaomi’s solid experience in smart hardware and big data, its smart device ecosystem, and the significant breakthroughs it has achieved in core artificial intelligence technologies and products, this in-depth cooperation aims to generate even more synergy between hardware and software and enhance the end-user experience on Xiaomi devices.

Founded in 2010, Xiaomi has achieved remarkable development as one of the world’s most innovative technology companies. Xiaomi’s fast-expanding product line includes but is not limited to smartphones, smart TVs, and a range of smart home products.

The cooperation between Xiaomi and Microsoft has been long-standing. Since 2015, Xiaomi has used Microsoft Azure operated by 21Vianet in China to run its Mi Cloud service for smartphone users. In June 2016, the two companies reached a global-scale partnership, and Xiaomi began to pre-install Microsoft Office and Skype apps on its Android-based smartphones and tablets, giving its customers modern workforce and communication tools. At the same time, Microsoft and Xiaomi reached intellectual property agreements to help Xiaomi’s products go global compliantly.

On October 31, 2017, Microsoft CEO Satya Nadella visited Xiaomi’s retail store with Lei Jun, the founder, chairman and CEO of Xiaomi, and Nadella showed great interest in Xiaomi’s products and services. The strategic MoU signed today further expands the potential of the cooperation between the two companies and opens up more possibilities for technological innovations, new products and breakthrough businesses in the future.

About Xiaomi

Xiaomi was founded in 2010 by serial entrepreneur Lei Jun based on the vision “innovation for everyone”. We believe that high-quality products built with cutting-edge technology should be made accessible to everyone. We create remarkable hardware, software, and Internet services for and with the help of our Mi fans. We incorporate their feedback into our product range, which currently includes Mi and Redmi smartphones, Mi TVs and set-top boxes, Mi routers, and Mi Ecosystem products including smart home products, wearables and other accessories. With presence in over 70 markets, Xiaomi is expanding its footprint across the world to become a global brand.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.


Kubernetes hybrid cloud emerges from Google-Cisco partnership

A forthcoming Kubernetes hybrid cloud option that joins products from Cisco and Google promises smoother portability and security, but at this point its distinguishing features remain theoretical.

Cisco plans to release the Cisco Container Platform (CCP) in the first quarter of 2018, with support for Kubernetes container orchestration on its HyperFlex hyper-converged infrastructure product. Sometime later this year, a second version of the container platform will link up with Google Kubernetes Engine to deliver a Kubernetes hybrid cloud offering based on the Cisco-Google partnership made public in October 2017.

“Cisco can bring a consistent hybrid cloud experience to our customers,” said Thomas Scherer, chief architect at Telindus Telecom, an IT service provider in Belgium and longtime Cisco partner that plans to offer hosted container services based on CCP. Many enterprises already use Cisco’s products, which should boost CCP’s appeal, he said.

CCP 2.0 will extend the Cisco Application Centric Infrastructure software-defined network fabric into Google’s public cloud, and enable stretched Kubernetes clusters between on-premises data centers and public clouds, Cisco executives said. Stretched clusters would enable smooth container portability between multiple infrastructures, one of the most attractive promises of Kubernetes hybrid clouds for enterprise IT shops reluctant to move everything to the cloud. CCP also will support Microsoft Azure and Amazon Web Services public clouds, and eventually CCP will incorporate DevOps monitoring tools from AppDynamics, another Cisco property.

“Today, if I have a customer that is using containers, I put them on a dedicated hosting infrastructure, because I don’t have enough confidence that I can maintain customer segregation [in a container environment],” Scherer said. “I hope that Cisco will deliver in that domain.”

He also expects that the companies’ strengths in enterprise data center and public cloud infrastructure components will give the Kubernetes hybrid cloud a unified multi-cloud dashboard with container management.

“Is it going to be easy? No, and the components included in the product may change,” he said. “But my expectation is that it will happen.”

Version 2 of the Cisco Container Platform will connect enterprise data centers with Google’s public cloud infrastructure (pictured: Google public cloud servers in Georgia).

Kubernetes hybrid cloud decisions require IT unity

Cisco customers have plenty of other Kubernetes hybrid cloud choices to consider, some of which are already available. Red Hat and AWS joined forces last year to integrate Red Hat’s Kubernetes-based OpenShift Container Platform with AWS services. Microsoft has its Azure public cloud and Azure Stack for on-premises environments, and late last year added Azure Container Service Engine to Azure Stack with support for Kubernetes container management templates.


However, many enterprises continue to kick the tires on container orchestration software and most do not run containers in production, which means the Cisco-Google partnership has a window to establish itself.

“Kubernetes support is table stakes at this point,” said Stephen Elliot, analyst at IDC. “Part of what Cisco is trying to do, along with other firms, is to expand its appeal to infrastructure and operations teams with monitoring, security and analytics features not included in Kubernetes.”

As Kubernetes hybrid cloud options proliferate, enterprise IT organizations must unite traditionally separate buyers in security, application development, IT management and IT operations to evaluate and select a product. Otherwise, each constituency will be swayed by its established vendor’s product and chaos could ensue, Elliot said.

“There are a lot of moving parts, and organizations are grappling with whom in their organization to turn to for leadership,” he said. “Different buyers can’t make decisions in a vacuum anymore, and there are a lot of politics involved.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, 43% said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up in 2017 — from 43% in 2016 and from 38% in 2015 — it is still low, especially considering the type of information that is stored in the cloud the most.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”


One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to actually achieve.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason it is so difficult to secure the cloud, according to the study, is that conventional infosec practices are harder to apply there, a challenge cited by 71% of respondents. The next most cited reason (67%) is that it is harder to assess a cloud provider’s compliance with security best practices and standards. Respondents also noted that it is more difficult to control or restrict end-user access in the cloud, which creates security challenges of its own.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks towards their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise — slightly. The 25% of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses is up from 19% in 2015. Fewer people (43%) said they were “not confident” in 2017 compared to 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

In 2018, a better, faster, more accessible cloud emerges

Here’s what’s new in the Microsoft Cloud: Microsoft is making it easier for developers to build great apps that take advantage of the latest analytics capabilities with free developer tools and languages, best-practice guidance, price reductions, and new features.

Better decisions through better analytics

Knowing how users interact with your apps is a critical first step in managing your product strategy and development pipeline. Using robust analytics, you can get the immediate feedback you need to determine how to engage users and make better decisions to improve your apps. With Visual Studio App Center, you can access App Center Analytics completely free. Now you can use this tool with Azure Application Insights to improve your business. Get started today.

New tools speed app development using time series data

Integrating IoT with other real-time applications can be a complex challenge. With Time Series Insights (TSI), developers can build applications that give valuable insights to customers, take fine-grain control over time series data, and easily plug TSI into a broader workflow or technology stack. To help developers get started and shorten development cycles, Microsoft has released new Azure Time Series Insights developer tools. With these tools, developers can more easily embed TSI’s platform into apps to power charts and graphs, compare data from different points in time, and dynamically explore data trends and correlations.
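As a generic illustration of that sort of exploration (plain Python rather than the TSI API; the readings below are invented sensor values), comparing data from two different points in time might look like this:

```python
import statistics

# Toy time-series comparison: contrast two windows of a series and
# measure how closely they track each other. Not the Time Series
# Insights API; the readings are made-up sensor values.
readings = [21.0, 21.4, 22.1, 23.0, 23.8, 24.1, 23.9, 23.2, 22.5, 21.9]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def compare_windows(series, a, b, width):
    """Compare two equal-width windows starting at offsets a and b."""
    wa, wb = series[a:a + width], series[b:b + width]
    return {
        "mean_delta": statistics.mean(wb) - statistics.mean(wa),
        "correlation": pearson(wa, wb),
    }
```

A TSI-backed chart would do the equivalent over data pulled from the service, rather than an in-memory list.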

Faster feedback drives better apps

Good intuition is important, but without user input and insights you are playing a potentially costly guessing game. Gathering feedback fast from beta users who are invested in your product’s success lets you learn and adapt quickly before getting too deep into code that’s expensive to correct later. Using this step-by-step guide from one of our Visual Studio App Center customers, you will learn how to swiftly gather quantitative and qualitative user feedback to build apps your customers love, anticipate and correct problems, and ultimately win customers’ loyalty.

Empowering data scientists with R updates

R, an open-source statistical programming language, empowers data scientists to drive insightful analytics, statistics, and visualizations for mapping social and marketing trends, developing scientific and financial models, and anticipating consumer behavior. Recently we’ve released Microsoft R Open 3.4.3, the latest version of Microsoft’s enhanced distribution of R. This free download includes the latest R language engine, compatibility, and additional capabilities for performance, reproducibility, and platform support.

New open-source analytics capabilities at a lower cost

Microsoft recently announced significant price reductions, along with new capabilities for Azure HDInsight, the open-source analytics cloud service that developers can implement in a wide range of mission-critical applications, including machine learning, IoT, and more. These include Apache Kafka on Azure HDInsight, Azure Log Analytics integration, a preview of the Enterprise Security Package for Azure HDInsight, and integration with Power BI direct query.

We are constantly creating new tools and features that reduce time-to-market and allow developers to do their best work. To stay up to date on Microsoft’s work in the cloud, visit https://cloudblogs.microsoft.com.

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its quest to develop container storage products this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project in GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”
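A toy model of that striping-and-replication idea (the node names, stripe size, and replica count below are invented for illustration; this is not Rook or Ceph code):

```python
import hashlib

# Stripe data across the cluster's pooled disks and keep several
# copies of each stripe, so rescheduling a container onto another
# node never requires unmounting and remounting an external volume.
NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2
STRIPE_SIZE = 4  # bytes per stripe; tiny so the example is readable

def place(data: bytes):
    """Return {node: [(offset, chunk), ...]} for striped, replicated data."""
    placement = {n: [] for n in NODES}
    for off in range(0, len(data), STRIPE_SIZE):
        chunk = data[off:off + STRIPE_SIZE]
        # Hash the stripe offset to pick a primary node deterministically.
        primary = int(hashlib.md5(str(off).encode()).hexdigest(), 16) % len(NODES)
        # Write the stripe to REPLICAS consecutive nodes in the ring.
        for r in range(REPLICAS):
            placement[NODES[(primary + r) % len(NODES)]].append((off, chunk))
    return placement
```

Because every stripe lives on several nodes, losing one node (or one availability zone's volume) leaves a readable copy elsewhere, which is the synchronization problem Chekrygin describes.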

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, which was suitable for the Kubernetes monitoring system in Prometheus that scraped telemetry data from the cluster every 10 to 30 seconds.

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight


The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it now owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing, more akin to a bet on an early stage idea. It has about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is that Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Announcing the general availability of Azure Event Grid

Modern applications are taking maximum advantage of the agility and flexibility of the cloud by moving away from monolithic architectures and instead using a set of distinct services, all working together. This includes foundational services offered by a cloud platform like Azure (Database, Storage, IoT, Compute, Serverless Functions, etc.) and application-specific services (inventory management, payment services, manufacturing processes, mobile experiences, etc.). In these new architectures, event-driven execution has become a foundational cornerstone. It replaces cumbersome polling for communication between services with a simple mechanism. These events could include IoT device signals, cloud provisioning notifications, storage blob events, or even custom scenarios such as new employees being added to HR systems. Reacting to such events efficiently and reliably is critical in these new app paradigms.

Today, I am excited to announce the general availability of Azure Event Grid, a fully managed event routing service that simplifies the development of event-based applications.

  • Azure Event Grid is the first of its kind, enabling applications and services to subscribe to all the events they need to handle, whether they come from Azure services or from other parts of the same application.
  • These events are delivered through push semantics, simplifying your code and reducing your resource consumption. You no longer need to continuously poll for changes and you only pay per event. The service automatically scales dynamically to handle millions of events per second.
  • Azure Event Grid provides multiple ways to react to these events including using Serverless offerings such as Azure Functions or Azure Logic Apps, using Azure Automation, or even custom web hooks for your code or 3rd party services. This means any service running anywhere can publish events and subscribe to reliable Azure Events.

We make it easy to react to Azure native events and build modern apps anywhere, on-premises and cloud, without restricting you to use only our public cloud services. This is unique to Azure Event Grid.

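The push model described above amounts to exposing an HTTPS endpoint that Event Grid POSTs events to. A minimal, framework-free sketch of that handler logic (the HTTP server wiring is omitted), including the one-time subscription validation handshake Event Grid performs before delivering real events:

```python
import json

# Event Grid POSTs a JSON array of events. The first delivery to a new
# webhook subscription is a validation event whose code must be echoed
# back to prove ownership of the endpoint.
VALIDATION_EVENT = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_event_grid_post(body: str) -> dict:
    """Parse one Event Grid delivery; return the response body to send."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == VALIDATION_EVENT:
            # Complete the subscription handshake.
            return {"validationResponse": event["data"]["validationCode"]}
        # Otherwise react to the event; here we just log it.
        print(f"received {event['eventType']} for {event.get('subject')}")
    return {}
```

In a real subscriber this function would sit behind an Azure Function, Logic App, or any web endpoint, and the reaction would be your business logic instead of a print.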

In the days since we announced public preview, we have seen many customers find innovative uses for Azure Event Grid and we’ve been blown away by all the great feedback from customers and the community. 

  • Outotec used Azure Event Grid to rearchitect their hybrid integration platform:

“Azure Event Grid enabled us to simplify the architecture of our cloud-based enterprise wide hybrid integration platform, by making it easy to reliably respond to events and changes in the global business data without polling.”

– Henri Syrjäläinen, Director of Digital Enterprise Architecture, Outotec Oyj

  • Paycor unified their human capital management applications using Azure Event Grid:

“Event Grid empowers Paycor to provide a unified experience to our customers, across the suite of our human capital management applications.  It becomes the backbone for an event driven architecture, allowing each application to broadcast and receive events in a safe, reliable way.  It solves many of the operational and scalability concerns that traditional pub-sub solutions cannot.”

– Anthony Your, Director of Architecture, Paycor, Inc.

  • Microsoft Devices supply chain team utilized Azure Event Grid as part of its serverless pipeline to optimize operations and reduce time to market. The details are described in this Microsoft supply chain serverless case study.

Here is what we have newly available since our preview:

  • Richer scenarios enabled through integration with more services: Since preview, we have added General Purpose Storage and Azure IoT Hub as new event publishers and Azure Event Hubs as a new destination (great for event archival, streaming, and buffering of events). IoT Hub adds support for device lifecycle events, such as device creation and device deletion, which can then be handled in a serverless manner. These new integrations simplify the architecture and expand the possibilities for your applications, whether they are in the cloud or on premises. Please see the full current list of Azure Event Grid service integrations for details and region-by-region availability. We will continue to add more services throughout the year.

Event Grid service integrations

  • Availability in more regions: Azure Event Grid is globally available in the following regions: West US, East US, West US 2, East US 2, West Central US, Central US, West Europe, North Europe, Southeast Asia, and East Asia with more coming soon.
  • Increased reliability and service level agreement (SLA): We now have a 24 hour retry policy with exponential back off for event delivery. We also offer an industry-leading 99.99% availability with a financially backed SLA for your production workloads. With today’s announcement, you can confidently build your business-critical applications to rely on Azure Event Grid.
  • Better developer productivity: Today, we are also releasing new Event Grid SDKs to streamline development. Management SDKs are now available for Python, .NET, and Node.js, with support for Go, Ruby, and Java coming soon. The publish SDK is now available for .NET, with support for Python, Node.js, Go, Ruby, and Java coming soon. Additionally, we have made it easier to consume events by simply fetching the JSON schema of all supported event types from our event schema store. This removes the burden on subscribers of understanding and deserializing events.
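The retry behavior mentioned above follows the familiar exponential-backoff pattern. A generic sketch of that pattern (the service's actual schedule and 24-hour window are its own; the parameters below are illustrative):

```python
import random
import time

def deliver_with_backoff(send, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Call `send`, retrying with exponential backoff on failure.

    The wait doubles after each failed attempt, is capped at
    max_delay, and gets jitter so many retrying clients don't
    synchronize their retries.
    """
    for attempt in range(max_attempts):
        try:
            return send()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

Event Grid applies this kind of policy on the delivery side, so subscribers that are briefly unavailable still receive their events.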

With today’s GA, I think you will find that Azure Event Grid becomes a critical component in your serverless application. Go ahead, give it a try with this simple and fun Event Grid Quickstart. Remember, the first 100,000 events per month are on us!

Here are some other samples/tutorials to help you get started:

  • Build serverless applications
    • Use IoT Hub and Logic apps to react to device lifecycle events [doc | video]
    • Instantly pick up and resize images in Blob Storage using a function [doc]
  • Automate your infrastructure operations
    • Appropriately tag VMs as they are spun up and send a notification to your Microsoft Teams channel [doc]
  • Facilitate communication between the different pieces of your distributed applications
    • Stream data from Event Hubs to your data warehouse [doc]

To learn more, please join us for our upcoming webinar on Tuesday, February 13, 2018. 

Register here: Building event-driven applications using serverless architectures.



SAP offers extra help on HR cloud migrations

SAP recently launched a program that offers services and tools to help with an HR cloud migration. The intent is to help HR managers make a business case and to ease some of the initial integration steps.

SAP has seen rapid growth of its SuccessFactors cloud human capital management (HCM) platform. But the firm has some 14,000 users of its older on-premises HCM suite, mostly in Europe, who have not fully migrated. Some are in a hybrid model and have been using parts of SuccessFactors.

Customers may feel “a lot of trepidation” over the initial HR cloud migration steps, said Stephen Spears, chief revenue officer at SAP. He said SAP is trying to prove with its new Upgrade2Success program “that it’s not difficult to go from their existing HR, on-premises environment to the cloud.”

The problems that stand in the way of an HR cloud migration may be complicated, especially in Europe.

HR investment remains strong

The time may be right for SAP to accelerate its cloud adoption efforts. HR spending remains strong, said analysts, and users are shifting work to HR cloud platforms.

If I were a cloud HR systems provider, I would be very excited for the future, at least in North America.
David Wagnervice president of research, Computer Economics

IDC said HCM applications are forecast to generate just over $15 billion in revenues globally this year, up 8% over 2017. This does not include payroll, just HCM applications, which address core HR functions such as personnel records, benefits administration and workforce management.

The estimated 2018 growth rate is a bit below prior year-over-year growth, which was 9% to 10%, “but still quite strong versus other back office application areas,” said Lisa Rowan, an IDC analyst. Growth is being driven in part by strong interest in replacing older on-premises core HR systems with SaaS-based systems, she said.

Cloud adoption for HR is strong in U.S.

Computer Economics, a research and consulting firm, ranks HR “right down the middle” of the 14 technologies it tracks in terms of organizational spending priorities, said David Wagner, vice president of research at the firm, which surveyed 220 companies ranging from $50 million to multibillion-dollar firms.

“Investment is higher in everything this year,” Wagner said, but IT operational budgets are not going up very fast and the reason is the cloud transition. Organizations are converting legacy systems to cloud systems and investing the savings back into the IT budget. “They’re converting to the cloud as fast as is reasonable in organizations right now,” he said.

“If I were a cloud HR systems provider, I would be very excited for the future, at least in North America,” Wagner said.

Cloud adoption different story in Europe

But Europe, where SAP has about 80% of its on-premises users, may be a different story.

Wagner, speaking generally and not specific to SAP, said the problem with cloud adoption in Europe is that there are much more stringent compliance rules around data in the cloud. There’s a lot of concern about data crossing borders and where it’s stored, and how it’s stored and encrypted. “Cloud adoption in general in Europe is behind North America because of those rules,” he said.

SAP’s new cloud adoption program brings together some services and updated tools that help customers make a business case, demonstrate the ROI and help with data integration. It takes on some of the work that a systems integrator might do.

Charles King, an analyst at Pund-IT, said SAP is aiming to reduce the risk and uncertainties involved in a sizable project. 

“That’s a wise move since cost, risk and uncertainty is the unholy trinity of bugaboos that plague organizations contemplating such substantial changes,” King said.