
Kubernetes hybrid cloud emerges from Google-Cisco partnership

A forthcoming Kubernetes hybrid cloud option that joins products from Cisco and Google promises smoother portability and security, but at this point its distinguishing features remain theoretical.

Cisco plans to release the Cisco Container Platform (CCP) in the first quarter of 2018, with support for Kubernetes container orchestration on its HyperFlex hyper-converged infrastructure product. Sometime later this year, a second version of the container platform will link up with Google Kubernetes Engine to deliver a Kubernetes hybrid cloud offering based on the Cisco-Google partnership made public in October 2017.

“Cisco can bring a consistent hybrid cloud experience to our customers,” said Thomas Scherer, chief architect at Telindus Telecom, an IT service provider in Belgium and longtime Cisco partner that plans to offer hosted container services based on CCP. Many enterprises already use Cisco’s products, which should boost CCP’s appeal, he said.

CCP 2.0 will extend the Cisco Application Centric Infrastructure software-defined network fabric into Google’s public cloud, and enable stretched Kubernetes clusters between on-premises data centers and public clouds, Cisco executives said. Stretched clusters would enable smooth container portability between multiple infrastructures, one of the most attractive promises of Kubernetes hybrid clouds for enterprise IT shops reluctant to move everything to the cloud. CCP also will support Microsoft Azure and Amazon Web Services public clouds, and eventually CCP will incorporate DevOps monitoring tools from AppDynamics, another Cisco property.

“Today, if I have a customer that is using containers, I put them on a dedicated hosting infrastructure, because I don’t have enough confidence that I can maintain customer segregation [in a container environment],” Scherer said. “I hope that Cisco will deliver in that domain.”

He also expects that the companies’ strengths in enterprise data center and public cloud infrastructure components will give the Kubernetes hybrid cloud a unified multi-cloud dashboard with container management.

“Is it going to be easy? No, and the components included in the product may change,” he said. “But my expectation is that it will happen.”

[Image: Google public cloud servers in Georgia. Version 2 of the Cisco Container Platform will connect enterprise data centers with Google’s public cloud infrastructure.]

Kubernetes hybrid cloud decisions require IT unity

Cisco customers have plenty of other Kubernetes hybrid cloud choices to consider, some of which are already available. Red Hat and AWS joined forces last year to integrate Red Hat’s Kubernetes-based OpenShift Container Platform with AWS services. Microsoft has its Azure public cloud and Azure Stack for on-premises environments, and late last year added Azure Container Service Engine to Azure Stack with support for Kubernetes container management templates.


However, many enterprises continue to kick the tires on container orchestration software, and most do not run containers in production, which means the Cisco-Google partnership has a window to establish itself.

“Kubernetes support is table stakes at this point,” said Stephen Elliot, analyst at IDC. “Part of what Cisco is trying to do, along with other firms, is to expand its appeal to infrastructure and operations teams with monitoring, security and analytics features not included in Kubernetes.”

As Kubernetes hybrid cloud options proliferate, enterprise IT organizations must unite traditionally separate buyers in security, application development, IT management and IT operations to evaluate and select a product. Otherwise, each constituency will be swayed by its established vendor’s product and chaos could ensue, Elliot said.

“There are a lot of moving parts, and organizations are grappling with whom in their organization to turn to for leadership,” he said. “Different buyers can’t make decisions in a vacuum anymore, and there are a lot of politics involved.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, 43% said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up in 2017 — from 43% in 2016 and 38% in 2015 — it is still low, especially considering the types of information most commonly stored in the cloud.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”


One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to achieve in practice.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason it is so difficult to secure the cloud, according to the study, is that it is harder to apply conventional information security practices there, a challenge cited by 71% of respondents. The next most cited reason (67%) is that it is harder to assess cloud providers’ compliance with security best practices and standards. Respondents also noted that it is more difficult to control or restrict end-user access to the cloud, which creates security challenges of its own.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks to their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise, if only slightly. The share of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses reached 25%, up from 19% in 2015. And fewer people (43%) said they were “not confident” in 2017, compared with 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

In 2018, a better, faster, more accessible cloud emerges

Here’s what’s new in the Microsoft Cloud: Microsoft is making it easier for developers to build great apps that take advantage of the latest analytics capabilities with free developer tools and languages, best-practice guidance, price reductions, and new features.

Better decisions through better analytics

Knowing how users interact with your apps is a critical first step in managing your product strategy and development pipeline. Using robust analytics, you can get the immediate feedback you need to determine how to engage users and make better decisions to improve your apps. With Visual Studio App Center, you can access App Center Analytics completely free, and you can now use it alongside Azure Application Insights to improve your business. Get started today.

New tools speed app development using time series data

Integrating IoT with other real-time applications can be a complex challenge. With Time Series Insights (TSI), developers can build applications that give valuable insights to customers, take fine-grained control over time series data, and easily plug TSI into a broader workflow or technology stack. To help developers get started and shorten development cycles, Microsoft has released new Azure Time Series Insights developer tools. With these tools, developers can more easily embed TSI’s platform into apps to power charts and graphs, compare data from different points in time, and dynamically explore data trends and correlations.

Faster feedback drives better apps

Good intuition is important, but without user input and insights you are playing a potentially costly guessing game. Gathering feedback fast from beta users who are invested in your product’s success lets you learn and adapt quickly before getting too deep into code that’s expensive to correct later. Using this step-by-step guide from one of our Visual Studio App Center customers, you will learn how to swiftly gather quantitative and qualitative user feedback to build apps your customers love, anticipate and correct problems, and ultimately win customers’ loyalty.

Empowering data scientists with R updates

R, an open-source statistical programming language, empowers data scientists to drive insightful analytics, statistics, and visualizations for mapping social and marketing trends, developing scientific and financial models, and anticipating consumer behavior. Recently we’ve released Microsoft R Open 3.4.3, the latest version of Microsoft’s enhanced distribution of R. This free download includes the latest R language engine, compatibility, and additional capabilities for performance, reproducibility, and platform support.

New open-source analytics capabilities at a lower cost

Microsoft recently announced significant price reductions, along with new capabilities for Azure HDInsight, the open-source analytics cloud service that developers can use in a wide range of mission-critical applications, including machine learning and IoT. These capabilities include Apache Kafka on Azure HDInsight, Azure Log Analytics integration, a preview of the Enterprise Security Package for Azure HDInsight, and integration with Power BI direct query.

We are constantly creating new tools and features that reduce time-to-market and allow developers to do their best work. To stay up to date on Microsoft’s work in the cloud, visit https://cloudblogs.microsoft.com.

Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its push into container storage this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.
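
To ground what “operator API” means here: an operator is a controller that watches a custom resource and reconciles cluster state to match it. Below is a minimal sketch of that watch-and-reconcile loop using the Kubernetes Python client; the Rook group, version and plural names are illustrative (they have changed across Rook releases), and the “reconcile” steps are stubs.

```python
# Minimal sketch of the operator pattern: watch a custom resource and
# reconcile state. Requires `pip install kubernetes` and a valid kubeconfig.
# The Rook CRD coordinates (group/version/plural) below are illustrative.
from kubernetes import client, config, watch

config.load_kube_config()
api = client.CustomObjectsApi()

w = watch.Watch()
for event in w.stream(api.list_cluster_custom_object,
                      group="ceph.rook.io", version="v1",
                      plural="cephclusters"):
    obj = event["object"]
    name = obj["metadata"]["name"]
    if event["type"] == "ADDED":
        print(f"reconcile: provision storage daemons for {name}")  # stub
    elif event["type"] == "DELETED":
        print(f"reconcile: tear down storage daemons for {name}")  # stub
```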

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project in GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, fast enough for Prometheus, which scraped telemetry data from the cluster every 10 to 30 seconds.
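
From the application side, consuming Rook-managed storage looks like claiming any other Kubernetes volume. Here is a minimal sketch using the Kubernetes Python client; the StorageClass name rook-block and the monitoring namespace are assumptions for illustration.

```python
# Sketch: a stateful app (e.g., Prometheus) claiming storage from a
# Rook/Ceph-backed pool. StorageClass and namespace names are assumptions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="prometheus-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="rook-block",  # assumed Rook-provisioned class
        resources=client.V1ResourceRequirements(
            requests={"storage": "50Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(
    namespace="monitoring", body=pvc)
```

Because the backing pool lives inside the cluster, pods can be rescheduled across nodes without the unmount/remount delay Chekrygin describes.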

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight


The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing; it is more akin to a bet on an early-stage idea, with about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is that Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”

Beth Pariseau is senior news writer for TechTarget’s Cloud and DevOps Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Announcing the general availability of Azure Event Grid

Modern applications are taking maximum advantage of the agility and flexibility of the cloud by moving away from monolithic architectures and instead using a set of distinct services, all working together. This includes foundational services offered by a cloud platform like Azure (Database, Storage, IoT, Compute, Serverless Functions, etc.) and application-specific services (inventory management, payment services, manufacturing processes, mobile experiences, etc.). In these new architectures, event-driven execution has become a cornerstone. It replaces cumbersome polling for communication between services with a simple mechanism. These events could include IoT device signals, cloud provisioning notifications, storage blob events, or even custom scenarios such as new employees being added to HR systems. Reacting to such events efficiently and reliably is critical in these new app paradigms.

Today, I am excited to announce the general availability of Azure Event Grid, a fully managed event routing service that simplifies the development of event-based applications.

  • Azure Event Grid is the first of its kind, enabling applications and services to subscribe to all the events they need to handle, whether they come from Azure services or from other parts of the same application.
  • These events are delivered through push semantics, simplifying your code and reducing your resource consumption. You no longer need to continuously poll for changes, and you pay only per event. The service scales dynamically to handle millions of events per second.
  • Azure Event Grid provides multiple ways to react to these events including using Serverless offerings such as Azure Functions or Azure Logic Apps, using Azure Automation, or even custom web hooks for your code or 3rd party services. This means any service running anywhere can publish events and subscribe to reliable Azure Events.

We make it easy to react to Azure native events and build modern apps anywhere, on premises and in the cloud, without restricting you to use only our public cloud services. This is unique to Azure Event Grid.
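
To make the publish side concrete, here is a minimal sketch that posts a custom event (the new-employee example above) to an Event Grid topic over its REST endpoint. The topic endpoint, key, event type and payload are placeholders, not values from this announcement.

```python
# Sketch: publishing a custom event to an Event Grid topic over REST.
# Event Grid expects a JSON array of events with the fields shown below;
# the endpoint and key are placeholders for your own topic.
import json
import uuid
from datetime import datetime, timezone

import requests

TOPIC_ENDPOINT = "https://<your-topic>.<region>-1.eventgrid.azure.net/api/events"
TOPIC_KEY = "<your-topic-access-key>"

event = {
    "id": str(uuid.uuid4()),
    "eventType": "hr.employeeAdded",               # custom, app-defined type
    "subject": "hr/employees/12345",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "data": {"name": "Jane Doe", "department": "Finance"},
    "dataVersion": "1.0",
}

resp = requests.post(
    TOPIC_ENDPOINT,
    headers={"aeg-sas-key": TOPIC_KEY, "Content-Type": "application/json"},
    data=json.dumps([event]),  # always an array, even for one event
)
resp.raise_for_status()  # a 200 means Event Grid accepted the batch
```

A subscriber (an Azure Function, a Logic App, or your own webhook) then receives this JSON by push rather than polling for it.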


In the days since we announced public preview, we have seen many customers find innovative uses for Azure Event Grid and we’ve been blown away by all the great feedback from customers and the community. 

  • Outotec used Azure Event Grid to rearchitect their hybrid integration platform:

“Azure Event Grid enabled us to simplify the architecture of our cloud-based enterprise wide hybrid integration platform, by making it easy to reliably respond to events and changes in the global business data without polling.”

– Henri Syrjäläinen, Director of Digital Enterprise Architecture, Outotec Oyj

  • Paycor unified their human capital management applications using Azure Event Grid:

“Event Grid empowers Paycor to provide a unified experience to our customers, across the suite of our human capital management applications. It becomes the backbone for an event-driven architecture, allowing each application to broadcast and receive events in a safe, reliable way. It solves many of the operational and scalability concerns that traditional pub-sub solutions cannot.”

– Anthony Your, Director of Architecture, Paycor, Inc.

  • Microsoft Devices supply chain team utilized Azure Event Grid as part of its serverless pipeline to optimize operations and reduce time to market. The details are described in this Microsoft supply chain serverless case study.

Here is what we have newly available since our preview:

  • Richer scenarios enabled through integration with more services: Since preview, we have added General Purpose Storage and Azure IoT Hub as new event publishers and Azure Event Hubs as a new destination (great for event archival, streaming, and buffering). IoT Hub adds support for device lifecycle events, such as device creation and device deletion, which can then be handled in a serverless manner. These new integrations simplify the architecture and expand the possibilities for your applications, whether they are in the cloud or on premises. Please see the full current list of Azure Event Grid service integrations for details and regional availability. We will continue to add more services throughout the year.

[Image: Event Grid service integrations]

  • Availability in more regions: Azure Event Grid is globally available in the following regions: West US, East US, West US 2, East US 2, West Central US, Central US, West Europe, North Europe, Southeast Asia, and East Asia with more coming soon.
  • Increased reliability and service level agreement (SLA): We now have a 24-hour retry policy with exponential backoff for event delivery (sketched in code after this list). We also offer an industry-leading 99.99% availability with a financially backed SLA for your production workloads. With today’s announcement, you can confidently build your business-critical applications to rely on Azure Event Grid.
  • Better developer productivity: Today, we are also releasing new Event Grid SDKs to streamline development. Management SDKs are now available for Python, .NET, and Node.js, with support for Go, Ruby, and Java coming soon. A publish SDK is now available for .NET, with support for Python, Node.js, Go, Ruby, and Java coming soon. Additionally, we have made it easier to consume events by fetching the JSON schema of all supported event types from our event schema store. This removes the subscriber’s burden of understanding and deserializing events.
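
Event Grid’s exact retry schedule is internal to the service, but the behavior described above follows the familiar capped-exponential-backoff pattern. The sketch below illustrates that pattern generically; the base delay, cap and jitter are assumptions, not the service’s published values.

```python
# Generic sketch of exponential backoff within a 24-hour retry window.
# Base delay, cap and jitter are illustrative assumptions.
import random

def backoff_schedule(base=10.0, cap=3600.0, window=24 * 3600.0):
    """Yield retry delays in seconds, doubling from `base` up to `cap`,
    until the cumulative delay exhausts the retry window."""
    elapsed, delay = 0.0, base
    while elapsed < window:
        jittered = delay * random.uniform(0.5, 1.5)  # spread retries out
        yield jittered
        elapsed += jittered
        delay = min(delay * 2, cap)

# Print the first few delays of one possible schedule.
for attempt, wait in enumerate(backoff_schedule(), start=1):
    if attempt > 5:
        break
    print(f"retry {attempt} after ~{wait:.0f}s")
```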

With today’s GA, I think you will find that Azure Event Grid becomes a critical component in your serverless application. Go ahead, give it a try with this simple and fun Event Grid Quickstart. Remember, the first 100,000 events per month are on us!

Here are some other samples/tutorials to help you get started:

  • Build serverless applications
    • Use IoT Hub and Logic apps to react to device lifecycle events [doc | video]
    • Instantly pick up and resize images in Blob Storage using a function [doc]
  • Automate your infrastructure operations
    • Appropriately tag VMs as they are spun up and send a notification to your Microsoft Teams channel [doc]
  • Facilitate communication between the different pieces of your distributed applications
    • Stream data from Event Hubs to your data warehouse [doc]

To learn more, please join us for our upcoming webinar on Tuesday, February 13, 2018. 

Register here: Building event-driven applications using serverless architectures.

Thanks,

Corey

SAP offers extra help on HR cloud migrations

SAP recently launched a program that offers services and tools to help with an HR cloud migration. The intent is to help HR managers make a business case and to ease some of the initial integration steps.

SAP has seen rapid growth of its SuccessFactors cloud human capital management platform. But the firm has some 14,000 users of its older on-premises HCM suite, mostly in Europe, who have not fully migrated. Some are in a hybrid model and have been using parts of SuccessFactors.

Customers may feel “a lot of trepidation” over the initial HR cloud migration steps, said Stephen Spears, chief revenue officer at SAP. He said SAP is trying to prove with its new Upgrade2Success program “that it’s not difficult to go from their existing HR, on-premises environment to the cloud.”

The problems that stand in the way of an HR cloud migration may be complicated, especially in Europe.

HR investment remains strong

The time may be right for SAP to accelerate its cloud adoption efforts. HR spending remains strong, said analysts, and users are shifting work to HR cloud platforms.


IDC said HCM applications are forecast to generate just over $15 billion in revenues globally this year, up 8% over 2017. This does not include payroll, just HCM applications, which address core HR functions such as personnel records, benefits administration and workforce management.

The estimated 2018 growth rate is a bit below prior year-over-year growth, which was 9% to 10%, “but still quite strong versus other back office application areas,” said Lisa Rowan, an IDC analyst. Growth is being driven in part by strong interest in replacing older on-premises core HR systems with SaaS-based systems, she said.

Cloud adoption for HR is strong in U.S.

Computer Economics, a research and consulting firm, places HR “right down the middle” of the 14 technologies it tracks in terms of organizational spending priorities, said David Wagner, vice president of research at the firm. It surveyed 220 companies, ranging from $50 million to multibillion-dollar firms.

“Investment is higher in everything this year,” Wagner said, but IT operational budgets are not going up very fast, and the reason is the cloud transition. Organizations are converting legacy systems to cloud systems and investing the savings back into the IT budget. “They’re converting to the cloud as fast as is reasonable in organizations right now,” he said.

“If I were a cloud HR systems provider, I would be very excited for the future, at least in North America,” Wagner said.

Cloud adoption different story in Europe

But Europe, where SAP has about 80% of its on-premises users, may be a different story.

Wagner, speaking generally and not specific to SAP, said the problem with cloud adoption in Europe is that there are much more stringent compliance rules around data in the cloud. There’s a lot of concern about data crossing borders and where it’s stored, and how it’s stored and encrypted. “Cloud adoption in general in Europe is behind North America because of those rules,” he said.

SAP’s new cloud adoption program brings together some services and updated tools that help customers make a business case, demonstrate the ROI and help with data integration. It takes on some of the work that a systems integrator might do.

Charles King, an analyst at Pund-IT, said SAP is aiming to reduce the risk and uncertainties involved in a sizable project. 

“That’s a wise move since cost, risk and uncertainty is the unholy trinity of bugaboos that plague organizations contemplating such substantial changes,” King said.

Azure ExpressRoute updates – New partnerships, monitoring and simplification

Azure ExpressRoute allows enterprise customers to privately and directly connect to Microsoft’s cloud services, providing a more predictable networking experience than traditional internet connections. ExpressRoute is available in 42 peering locations globally and is supported by a large ecosystem of more than 100 connectivity providers. Leading customers use ExpressRoute to connect their on-premises networks to Azure, as a vital part of managing and running their mission critical applications and services.

Cisco to build Azure ExpressRoute practice

As we continue to grow the ExpressRoute experience in Azure, we’ve found our enterprise customers benefit from understanding networking issues that occur in their internal networks with hybrid architectures. These issues can impact their mission-critical workloads running in the cloud.

To help address on-premises issues, which often require deep technical networking expertise, we continue to partner closely with Cisco to provide a better customer networking experience. Working together, we can solve the most challenging networking issues encountered by enterprise customers using Azure ExpressRoute.

Today, Cisco announced an extended partnership with Microsoft to build a new network practice providing Cisco Solution Support for Azure ExpressRoute. We are fully committed to working with Cisco and other partners with deep networking experience to build and expand on their networking practices and help accelerate our customers’ journey to Azure.

Cisco Solution Support provides customers with additional centralized options for support and guidance for Azure ExpressRoute, targeting the customer’s on-premises end of the network.

New monitoring options for ExpressRoute

To provide more visibility into ExpressRoute network traffic, Network Performance Monitor (NPM) for ExpressRoute will be generally available in six regions in mid-February, following a successful preview announced at Microsoft Ignite 2017. NPM enables customers to continuously monitor their ExpressRoute circuits and alert on several key networking metrics, including availability, latency, and throughput, in addition to providing a graphical view of the network topology.

NPM for ExpressRoute can easily be configured through the Azure portal to quickly start monitoring your connections.

We will continue to enhance the footprint, features and functionality of NPM for ExpressRoute to provide richer monitoring capabilities.


Figure 1: Network Performance Monitor and Endpoint monitoring simplifies ExpressRoute monitoring

Endpoint monitoring for ExpressRoute enables customers to monitor connectivity not only to PaaS services such as Azure Storage but also to SaaS services such as Office 365 over ExpressRoute. Customers can continuously measure and alert on the latency, jitter, packet loss and topology of their circuits from any site to PaaS and SaaS services. A new preview of Endpoint Monitoring for ExpressRoute will be available in mid-February.

Simplifying ExpressRoute peering

To further simplify management and configuration of ExpressRoute, we have merged public and Microsoft peerings. Now available on Microsoft peering are Azure PaaS services such as Azure Storage and Azure SQL, along with Microsoft SaaS services (Dynamics 365 and Office 365). Access to your Azure virtual networks remains on private peering.


Figure 2: ExpressRoute with Microsoft peering and private peering

ExpressRoute, using BGP, advertises Microsoft prefixes to your internal network. Route filters let you select the specific Office 365 or Dynamics 365 services (prefixes) accessed via ExpressRoute. You can also select Azure services by region (e.g., Azure US West, Azure Europe North, Azure East Asia). Previously this capability was available only on ExpressRoute Premium. We will enable Microsoft peering configuration for standard ExpressRoute circuits in mid-February.
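
Route filters are typically configured in the portal, but the flow can also be scripted. The sketch below uses the azure-mgmt-network Python SDK to add an allow rule for a BGP community to an existing route filter; the resource names and the community value are placeholders, and the model and operation names should be verified against your installed SDK version.

```python
# Sketch: allowing selected service prefixes (a BGP community) on a route
# filter for Microsoft peering. All names and the community value are
# placeholders; verify communities against Microsoft's BGP documentation.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.network import NetworkManagementClient

credentials = ServicePrincipalCredentials(
    client_id="<app-id>", secret="<secret>", tenant="<tenant-id>")
network = NetworkManagementClient(credentials, "<subscription-id>")

rule = {
    "access": "Allow",
    "route_filter_rule_type": "Community",
    "communities": ["12076:5010"],  # placeholder service community
}
poller = network.route_filter_rules.create_or_update(
    "<resource-group>", "<route-filter-name>", "allow-service", rule)
print(poller.result().provisioning_state)
```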


New ExpressRoute locations

ExpressRoute is always configured as a redundant pair of virtual connections across two physical routers. This highly available connection enables us to offer an enterprise-grade SLA. We recommend that customers connect to Microsoft in multiple ExpressRoute locations to meet their Business Continuity and Disaster Recovery (BCDR) requirements. Previously this required customers to have ExpressRoute circuits in two different cities. In select locations we will provide a second ExpressRoute site in a city that already has an ExpressRoute site. A second peering location is now available in Singapore. We will add more ExpressRoute locations within existing cities based on customer demand. We’ll announce more sites in the coming months.

AI, machine learning ahead for Box content management platform

Content management spans enterprise content management, web content management, cloud, on premises and all the combinations thereof. What used to be a well-defined, vendor-versus-vendor competition is now spread across multiple categories, platforms and hosting sites.

Box content management is more of an enterprise than web content tool, competing with Documentum, SharePoint and OpenText more than Drupal and WordPress. It is all cloud, and it has become an API-driven collaboration platform, as well. We caught up with Jeetu Patel, Box’s chief product officer, to discuss his company’s roadmap for competing in this changing market.

How does Box content management and its 57 million users fit into the overall enterprise content management (ECM) market right now?

Jeetu Patel: There’s a convergence of multiple different markets, because what’s happened, unfortunately, in this industry, people have taken a market-centric view, rather than a customer- and user-centric view. When you think about what people want to do, they might start with a piece of content that’s unstructured — like yourself, with this article — and [share and collaborate with people inside and outside of the organization and eventually] publish it to the masses.


During that entire lifecycle, it’s pretty important your organization is maintaining intellectual property, governance and security around it. You might even have a custom mobile app, and you might want to make sure the content is coming from the same content infrastructure. Look at all the people served in this lifecycle; eventually, that content might get archived and disposed. Typically, there’s like 17 different systems that piece of content might go through to have all these things done with it. This seemed like a counterproductive way to work.

Our thinking from the beginning was, ‘Shouldn’t there be a single source of truth where a set of policies can be applied to content, a place where you could collaborate and work on content, but if you have other applications that you’re using, you should still be able to integrate them into the content and share with people inside and outside the organization?’ That’s what we think this market should evolve into … and we call that cloud content management.

Smart search, AI, those things are on other vendors’ product roadmaps. What’s ahead for Box content management?

Patel: The three personas we serve are the end users, like you writing this article; enterprise IT admins and the enterprise security professionals; and the developer who’s building an application, but doesn’t want to rebuild content management infrastructure every time they serve up a piece of content, [but instead use API calls such as Box’s high-fidelity viewer for videos].

When we think about the roadmap, we have to be thinking at scale, for millions of users across hundreds of petabytes of data, and make it frictionless so that people who aren’t computer jockeys — just average people — use our system. One of our key philosophies is identifying megatrends and making them tailwinds for our business.

Looking back 13 years ago when we started, some of the big trends that have happened were cloud and mobile. We used those to propel our business forward. Now, it’s artificial intelligence and machine learning. In the next five years, content management is going to look completely different than it has the last 25.


Content’s going to get meaningfully more intelligent. Machine learning algorithms should, for example, detect all the elements in an image, OCR [optical character recognition] the text and automatically populate the metadata without doing any manual data entry. Self-describing. [Sentiment analysis] when you’re recording every single call at a call center. Over time, the ultimate nirvana is that you’ll never have to search for and open up an unstructured piece of content — you just get an answer. We want to make sure we take advantage of all those innovations and bring them to Box.
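
As a rough illustration of that “self-describing” idea, the sketch below OCRs an image and auto-populates a metadata record with no manual entry. pytesseract and Pillow are illustrative stand-ins, not Box’s actual pipeline, and the file path is a placeholder.

```python
# Sketch: make a scanned image "self-describing" by OCRing its text and
# auto-populating metadata. Requires `pip install pytesseract pillow` and
# the tesseract-ocr binary; the input path is a placeholder.
from datetime import datetime, timezone

import pytesseract
from PIL import Image

def describe(path):
    img = Image.open(path)
    text = pytesseract.image_to_string(img)
    return {
        "source": path,
        "dimensions": f"{img.width}x{img.height}",
        "ocr_text": text.strip(),
        # naive keyword extraction; a real pipeline would use ML models
        "keywords": sorted({w.lower() for w in text.split() if len(w) > 4}),
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }

print(describe("scanned_contract.png"))  # placeholder file
```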

How does Box content management compete with SharePoint, which is ingrained in many organizations and must be a formidable competitor, considering the always-expanding popularity of SharePoint Online?

Patel: Microsoft is an interesting company. They are one of our biggest competitors, with SharePoint and OneDrive, and one of our biggest partners, with Azure. We partner with them very closely for Azure and the Office 365 side of the house. And we think, [with Box migrations,] there’s an area where there’s an opportunity for customers to [reduce] fragmented [SharePoint] infrastructure and have a single platform to make it easy for user, administrator and developer to work end to end … and modernizing their business processes, as well.

Modernize their business processes?

Patel: Once you migrate the content over to Box, there’s a few things that happen you weren’t able to do in the past. For example, you can now make sure users can access content anywhere on any device, which you couldn’t do in the past without going through a lot of hoops. Try sharing a piece of content with someone outside of your organization that you started in OneDrive and moved over to SharePoint. They actually have a troubleshooting page for it. It’s not just SharePoint; it’s any legacy ECM system that has this problem. We want to make sure we solve that.

Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” she said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

[Image: Enterprises are putting more emphasis on cloud communications services.]

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity won’t affect how enterprises buy UCaaS, yet.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.

Chip bugs hit cloud computing usage less than first feared

In the aftermath of one of the largest compute vulnerability disclosures in years, it turns out that cloud computing usage won’t suffer greatly after all.

Public clouds were potentially among the most imperiled architectures from the Spectre and Meltdown chip vulnerabilities. But at least from the initial patches, the impact to these platforms’ security and performance appears to be less dire than predicted.

Many industry observers expressed concern that these chip-level vulnerabilities would make the multitenant cloud model a conspicuous target for hackers to gain access to data in other users’ accounts on the same shared host. But major cloud vendors’ quick responses, which in some cases began months ago, have largely addressed those issues.

Customers must still update systems that live on top of the cloud, but with the underlying patches, cloud environments are well-positioned to address the initial concerns about data theft. And cloud customers have far less to do than a company that owns its own data center and needs to update its hardware, microcode, hypervisor and perhaps management instances.

“The sky is not falling; just relax,” said Chris Gardner, an analyst with Forrester Research. “They’re probably the most critical CPU bugs we’ve seen in quite some time, but the mitigations help and the chip manufacturers are already working on long-term solutions.”

In some ways, vendors’ rapid response to install fixes to the Meltdown and Spectre vulnerabilities also illustrates their centralization and automation chops.

“We couldn’t have worked with hardware vendors and open source projects like Linux at the pace they were able to patch,” said Joe Kinsella, CTO and founder of CloudHealth, a cloud managed service provider in Boston. “The end result is a testament to the centralization of the ability to actually go and respond.”

Security experts say there are no known exploits in the wild for the Meltdown and the two-pronged Spectre vulnerabilities. The execution of a hack through these vulnerabilities, especially Spectre, is beyond the scope of the average hacker, who is far more likely to find a path of less resistance, they say.

In fact, the real impact from Meltdown and Spectre vulnerabilities so far has been the patching process itself. Microsoft, in particular, riled some of its Azure customers with forced, unscheduled reboots, after reports about Meltdown and Spectre surfaced before the embargo on the disclosure was to be lifted. Google, for its part, said it avoided reboots by live migrating all its customers.

And while Amazon Web Services (AWS), Microsoft, Google and others could quietly get ahead of the problem to varying degrees, smaller cloud companies were often left scrambling.

AMD and Intel have worked on firmware updates to further mitigate the problem, but early versions of these have caused issues of their own. Updated patches are supposedly imminent, but it’s unclear if they will require another round of cloud provider reboots.

The initial patches to Meltdown and Spectre are stopgap measures — it may take years to redesign chips in a way that doesn’t rely on speculative execution, an optimization technique at the root of these vulnerabilities. It’s also possible that any fundamental redesign of these chips could ultimately benefit cloud vendors, which swap out hardware more frequently than traditional enterprises and thus could jump on the new processors faster.


These flaws could cause potential customers to rein in their cloud computing usage, or do additional due diligence before they transition out of their own data centers. This is particularly true in the financial sector and other heavily regulated industries that have just begun to warm to the public cloud.

“If you [are] starting a new project, there’s this question mark that wasn’t there before,” said Marty Puranik, CEO of Atlantic.Net, a cloud hosting provider in Orlando, Fla. “I can’t imagine a chief risk officer or chief security officer saying this is inconsequential to what we’re going to do in the future.”

Performance hits not as bad as first predicted

The other potential fallout from Spectre and Meltdown is how the patches will impact performance. Initial predictions were up to a 30% slowdown, and frustrated customers took to the Internet to highlight major performance hits. Cloud vendors have pushed back on those estimates, however, and multiple managed service providers that oversee thousands of servers on behalf of their clients said that the vast majority of workloads were unaffected.

While it remains to be seen if performance issues will start to emerge over time, IT pros seem to corroborate the providers’ claims. More than a dozen sources — many of whom requested anonymity because of the situation’s sensitive and fluid nature — told SearchCloudComputing that they saw almost no impact from the patches.

The reality is that the number of impacted systems is fairly small and the performance impact is highly variable, said Kinsella. “If it was really 30% I think we’d be having a different conversation because that’s like rolling back a couple years of Moore’s Law,” he said.

Zendesk, based in San Francisco, suspected something was up with its cloud environment following an uptick in reboot notices from AWS toward the end of 2017, said Steve Loyd, vice president of technology operations at Zendesk. Those reboots weren’t exactly welcome, but were better than the alternative, and the company hasn’t seen a big impact from testing patches so far, he said.

Google said it has seen no reports of notable impacts for its cloud customers, while Microsoft and AWS initially said they expected a minority of customers to see performance degradation. It’s unclear how Microsoft has mitigated these issues for those customers, though it has recommended customers switch to a faster networking service that just became generally available. AWS said in a statement that, since installing its patches, it has worked with impacted customers to optimize workloads and “in almost every case, prevent significant changes to their cost.”

The biggest potential exception to these negligible impacts on cloud computing usage would be anything that uses the OS kernel extensively, such as distributed databases or caching systems. Of course, the same type of workload on premises would presumably face the same problem, but even a small impact adds up at scale.

“If any single system doesn’t appear to have more than 1% impact, it’s almost immeasurable,” said Eric Wright, chief evangelist at Turbonomic, a Boston-based hybrid cloud management provider. “But if you have that across 100 systems, you have to add one new virtual system to your load, so no matter how you slice it, there’s some kind of impact.”
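
One way to gauge that per-system impact is a syscall-heavy micro-benchmark run before and after patching, since the Meltdown fix (kernel page-table isolation) makes each user-to-kernel crossing more expensive. A rough sketch, assuming a Linux host:

```python
# Rough micro-benchmark of syscall throughput, the kind of kernel-heavy
# work most affected by Meltdown patches. Compare runs before and after
# patching on the same host; absolute numbers vary widely by machine.
import os
import time

def syscalls_per_second(duration=2.0, chunk=4096):
    fd = os.open("/dev/zero", os.O_RDONLY)  # each os.read is one syscall
    count = 0
    start = time.perf_counter()
    try:
        while time.perf_counter() - start < duration:
            os.read(fd, chunk)
            count += 1
    finally:
        os.close(fd)
    return count / (time.perf_counter() - start)

print(f"~{syscalls_per_second():,.0f} read() syscalls per second")
```

CPU-bound workloads that rarely enter the kernel should show little change on a test like this, which matches what the providers and IT pros above report.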

Cloud providers also could take more of a hit with customers simply because of their pricing schemes. A company that owns its own data center could just throw some underused servers at the problem. But cloud vendors charge based on CPU cycles, and slower workloads there could have a more pronounced impact, said Pete Lindstrom, an analyst at IDC.

“It’s impressionistic stuff but that’s how security works,” he said. “Really, the question will be what does the monthly bill look like, and is the impact actually there?”

The biggest beneficiary from performance impacts could be abstracted services, such as serverless or platform as a service products. In those scenarios, all the patches are the responsibility of the provider and analysts believe that, to the customer, these services will appear unaltered.

ACI Information Group, a news and social media aggregator, patched its AWS EC2 instances, base AMIs and Docker images. So far the company hasn’t noticed any huge issues, but employees did take note that its serverless workloads required no work on their part to address the problem and the performance was unaffected, said Chris Moyer, vice president of technology at ACI and a TechTarget contributor.

“We have about 40% of our workload on serverless now, so that’s a big win for us too, and another reason to complete our migration entirely,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.