Tag Archives: Kubernetes

Kubernetes authentication project wrestles with migration problems

Kubernetes has not only matured, it’s showing a wrinkle or two.

In case there was any doubt that Kubernetes is no longer a fledgling project, it has begun to show its age with the accumulation of technical debt, as upstream developers struggle to smooth the migration process for a core set of security improvements.

Kubernetes authentication and role-based access control (RBAC) have improved significantly since the project reached version 1.0 four years ago. But one aspect of Kubernetes authentication management remains stuck in the pre-1.0 era: the management of access tokens that secure connections between Kubernetes pods and the Kubernetes API server.

Workload Identity for Google Kubernetes Engine (GKE), released last month, illuminated this issue. Workload Identity is now Google’s recommended approach to authentication between containerized GKE application services and other Google Cloud Platform (GCP) services, because users no longer have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Workload Identity also handles secrets rotation, and can set fine-grained expiration and intended-audience policies that enforce good Kubernetes security hygiene.
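Under the hood, Workload Identity maps a Kubernetes service account to a Google Cloud IAM service account, and GKE then issues and rotates the short-lived credentials on the workload’s behalf. The sketch below shows the Kubernetes half of that mapping with the official Python client; the service account names, namespace and project are hypothetical, and it assumes the GCP-side IAM binding already exists.

```python
# A minimal sketch, assuming Workload Identity is enabled on the GKE cluster and
# that the GCP service account below already exists with the
# roles/iam.workloadIdentityUser binding for this Kubernetes service account.
# All names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The annotation tells GKE which Google Cloud identity pods using this
# Kubernetes service account should act as; no keys or secrets are stored.
patch = {
    "metadata": {
        "annotations": {
            "iam.gke.io/gcp-service-account": "app-gsa@my-project.iam.gserviceaccount.com"
        }
    }
}
core.patch_namespaced_service_account(name="app-ksa", namespace="default", body=patch)
```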

GKE users are champing at the bit to use Workload Identity because it could eliminate the need to manage this aspect of Kubernetes authentication with a third-party tool such as HashiCorp’s Vault, or worse, with error-prone manual techniques.

“Without this, you end up having to manage a bunch of secrets and access [controls] yourself, which creates lots of opportunities for human error, which has actually bitten us in the past,” said Josh Koenig, head of product at Pantheon Platform, a web operations platform in San Francisco that hosts more than 200,000 Drupal and WordPress sites.

Koenig recalled an incident where a misconfigured development cluster could have exposed an SSL certificate. Pantheon’s IT team caught the mistake, and tore down and reprovisioned the cluster. Pantheon also uses Vault in its production clusters to guard against such errors for more critical workloads but didn’t want that tool’s management overhead to slow down provisioning in the development environment.

“It’s a really easy mistake to make, even with the best of intentions, and the best answer to that is just not having to manage security as part of your codebase,” Koenig said. “There are ways to do that that are really expensive and heavy to implement, and then there’s Google doing it for you [with Workload Identity].”

Kubernetes authentication within clusters lags Workload Identity

Workload Identity uses updated Kubernetes authentication tokens, available in beta since Kubernetes 1.12, to more effectively and efficiently authenticate Kubernetes-based services as they interact with other GCP services. Such communications are much more likely than communications within Kubernetes clusters to be targeted by attackers, and there are other ways to shore up security for intracluster communications, including using OpenID Connect tokens based on OAuth 2.0.

However, the SIG-Auth group within the Kubernetes upstream community is working to bring Kubernetes authentication tokens that pass between Kubernetes Pods up to speed with the Workload Identity standard, with automatic secrets rotation, expiration and limited-audience policies. This update would affect all Kubernetes environments, including those that run in GKE, and could allow cloud service providers to assume even more of the Kubernetes authentication management burden on behalf of users.

We think [API Server authentication tokens] improves the default security posture of Kubernetes open source.
Mike Danese, software engineer, Google; chair, Kubernetes SIG-Auth

“Our legacy tokens were only meant for use against the Kubernetes API Server, and using those tokens against something like Vault or Google poses some potential escalations or security risks,” said Mike Danese, a Google software engineer and the chair of Kubernetes SIG-Auth. “We’ll push on [API Server authentication tokens] a little bit because we think it improves the default security posture of Kubernetes open source.”

There’s a big problem, however. A utility to migrate older Kubernetes authentication tokens used with the Kubernetes API server to the new system, dubbed Bound Service Account Token Volumes and available in alpha since Kubernetes 1.13, has hit a show-stopping snag. Early users have encountered a compatibility issue when they try to migrate to the new tokens, which require a new kind of storage volume for authentication data that can be difficult to configure.
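The new tokens reach pods through a projected volume rather than the legacy secret-mounted token, and that volume change is what early adopters have stumbled over. As a rough illustration only — the audience, expiration and paths below are made-up values, not project defaults — a pod spec built with the official Python Kubernetes client might request such a token like this:

```python
# Illustrative sketch of a pod spec that mounts a bound, audience-scoped
# service account token via a projected volume. All values are placeholders.
from kubernetes import client

token_volume = client.V1Volume(
    name="bound-token",
    projected=client.V1ProjectedVolumeSource(sources=[
        client.V1VolumeProjection(
            service_account_token=client.V1ServiceAccountTokenProjection(
                audience="https://kubernetes.default.svc",  # intended recipient of the token
                expiration_seconds=3600,                    # the kubelet refreshes the token before expiry
                path="token",
            )
        )
    ]),
)

app = client.V1Container(
    name="app",
    image="example/app:latest",  # placeholder image
    volume_mounts=[client.V1VolumeMount(name="bound-token",
                                        mount_path="/var/run/secrets/tokens")],
)

pod_spec = client.V1PodSpec(containers=[app], volumes=[token_volume])
```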

Without the new tokens, Kubernetes authentication between pods and the API Server could theoretically face the same risk of human error or mismanagement as GKE services before Workload Identity was introduced, IT experts said.

“The issues that are discussed as the abilities of Workload Identity — to reduce blast radius, require the refresh of tokens, enforce a short time to live and bind them to a single pod — would be the potential impact for current Kubernetes users” if the tokens aren’t upgraded eventually, said Chris Riley, DevOps delivery director at CPrime Inc., an Agile software development consulting firm in Foster City, Calif.

Kubernetes authentication upgrade beta delayed

After Kubernetes 1.13 was rolled out in December 2018, reports indicated that a beta release for Bound Service Account Token Volumes would arrive by Kubernetes 1.15, which was released in June 2019, or Kubernetes 1.16, due in September 2019.

However, the feature did not reach beta in 1.15, and is not expected to reach beta in 1.16, Danese said. It’s also still unknown how the eventual migration to new Kubernetes authentication tokens for the API Server will affect GKE users.

“We could soften our stance on some of the improvements to break legacy clients in fewer ways — for example, we’ve taken a hard stance on file permissions, which disrupted some legacy clients, binding specifically to the Kubernetes API server,” Danese said. “We have some subtle interaction with other security features like Pod Security Policy that we could potentially pave over by making changes or allowances in Pod Security Policies that would allow old Pod Security Policies to use the new tokens instead of the legacy tokens.”

Widespread community attention to this issue seems limited, Riley said, and it could be some time before it’s fully addressed.

However, he added, it’s an example of why Kubernetes users among his consulting clients increasingly turn to cloud service providers to manage their complex container orchestration environments.

“The value of services like Workload Identity is the additional integration these providers are doing into other cloud services, simplifying the adoption and increasing overall reliability of Kubernetes,” Riley said. “I expect them to do the hard work and support migration or simple adoption of future technologies.”


Free Kubernetes security tools broaden enterprise choices

Kubernetes security tools have proliferated in 2018, and their growing numbers reflect increased maturity around container security among enterprise IT shops.

The latest additions to this tool category include a feature in Google Kubernetes Engine called Binary Authorization, which can create whitelists of container images and code that are authorized to run on GKE clusters. All other attempts to launch unauthorized apps will fail, and the GKE feature will document them.

Binary Authorization is in public beta. Google will also make the feature available for on-premises deployments through updates to Kritis, an open source project focused on deployment-time policy enforcement.

Aqua Security also added to the arsenal of Kubernetes security tools at IT pros’ disposal with an open source utility, called kube-hunter, which can be used for penetration testing of Kubernetes clusters. The tool performs passive scans of Kubernetes clusters to look for common vulnerabilities, such as dashboard and management server ports that were left open. Such seemingly obvious errors have exposed high-profile companies, such as Tesla, Aviva and Gemalto, to attack.

Users can also perform active penetration tests with kube-hunter. In this scenario, the tool attempts to exploit the vulnerabilities it finds as if an attacker has gained access to Kubernetes cluster servers, which may highlight additional vulnerabilities in the environment.


These tools join several other Kubernetes security offerings introduced in 2018 — from Docker Enterprise Edition’s encryption and secure container registry features for the container orchestration platform to Kubernetes support in tools from Qualys and Alert Logic. The growth of Kubernetes security tools indicates the container security conversation has shifted away from ways to secure individual container images and hosts to security at the level of the application and Kubernetes cluster.

“Containers are not foolproof, but container security is good enough for most users at this point,” said Fernando Montenegro, analyst with 451 Research. “The interest in the industry shifts now to how to do security at the orchestration layer and secure broader container deployments.”

GKE throws down the gauntlet for third-party container orchestration tools

The question for users, as cloud providers add these features, is, why go for a third-party tool when the cloud provider does this kind of thing themselves?
Fernando Montenegro, analyst, 451 Research

Google’s Binary Authorization feature isn’t unique; other on-premises and hybrid cloud Kubernetes tools, such as Docker Enterprise Edition, Mesosphere DC/OS and Red Hat OpenShift, offer similar capabilities to prevent unauthorized container launches on Kubernetes clusters.

However, third-party vendors once again find themselves challenged by a free and open source alternative from Google. Just as Kubernetes supplanted other container orchestration utilities, these additional Kubernetes management features further reduce third-party tools’ competitiveness.

GKE Binary Authorization is one of the first instances of a major cloud provider adding such a feature natively in its Kubernetes service, Montenegro said.

“[A gatekeeper for Kubernetes] is not something nobody’s thought of before, but I haven’t seen much done by other cloud providers on this front yet,” Montenegro said. AWS and Microsoft Azure will almost certainly follow suit.

“The question for users, as cloud providers add these features, is, why go for a third-party tool when the cloud provider does this kind of thing themselves?” Montenegro said.

Aqua Security’s penetration testing tool is unlikely to unseat full-fledged penetration testing tools enterprises use, such as Nmap and Burp Suite, but its focus on Kubernetes vulnerabilities specifically with a free offering will attract some users, Montenegro said.

Aqua Security and its main competitor, Twistlock, also must stay ahead of Kubernetes security features as they’re incorporated into broader enterprise platforms from Google, Cisco and others, Montenegro said.

IT pros debate upstream vs. packaged Kubernetes implementations

Packaged versions of Kubernetes promise ease of use for the finicky container orchestration platform, but some enterprises will stick with a DIY approach to Kubernetes implementation.

Red Hat, Docker, Heptio, Mesosphere, Rancher, Platform9, Pivotal, Google, Microsoft, IBM and Cisco are among the many enterprise vendors seeking to cash in on the container craze with prepackaged Kubernetes implementations for private and hybrid clouds. Some of these products — such as Red Hat’s OpenShift Container Platform, Docker Enterprise Edition and Rancher’s eponymous platform — offer their own distribution of the container orchestration software, and most add their own enterprise security and management features on top of upstream Kubernetes code.

However, some enterprise IT shops still prefer to download Kubernetes source code from GitHub and leave out IT vendor middlemen.

“We’re seeing a lot of companies go with [upstream] Kubernetes over Docker [Enterprise Edition] and [Red Hat] OpenShift,” said Damith Karunaratne, director of client solutions for Indellient Inc., an IT consulting firm in Oakville, Ont. “Those platforms may help with management out of the gate, but software license costs are always a consideration, and companies are confident in their technical teams’ expertise.”

The case for pure upstream Kubernetes

One such company is Rosetta Stone, which has used Docker containers in its DevOps process for years, but has yet to put a container orchestration tool into production. In August 2017, the company considered Kubernetes overkill for its applications and evaluated Docker swarm mode as a simpler approach to container orchestration.

Fast-forward a year, however, and the global education software company plans to introduce upstream Kubernetes into production due to its popularity and ubiquity as the container orchestration standard in the industry.

Concerns about Kubernetes management complexity are outdated, given how the latest versions of the tool smooth out management kinks and require less customization for enterprise security features, said Kevin Burnett, DevOps lead for Rosetta Stone in Arlington, Va.

“We’re a late adopter, but we have the benefit of more maturity in the platform,” Burnett said. “We also wanted to avoid [licensing] costs, and we already have servers. Eventually, we may embrace a cloud service like Google Kubernetes Engine more fully, but not yet.”

Burnett said his team prefers to hand-roll its own configurations of open source tools, and it doesn’t want to use features from a third-party vendor’s Kubernetes implementation that may hinder cloud portability in the future.

Other enterprise IT shops are concerned that third-party Kubernetes implementations — particularly those that rely on a vendor’s own distribution of Kubernetes, such as Red Hat’s OpenShift — will be easier to install initially, but could worsen management complexity in the long run.

“Container sprawl combined with a forked Kubernetes runtime in the hands of traditional IT ops is a management nightmare,” said a DevOps transformation leader at an insurance company who spoke on condition of anonymity, because he’s not authorized to publicly discuss the company’s product evaluation process.

His company is considering OpenShift because of an existing relationship with the vendor, but adding a new layer of orchestration and managing multiple control planes for VMs and containers would also be difficult, the DevOps leader predicted, particularly when it comes to IT ops processes such as security patching.

“Why invite that mess when you already have your hands full with a number of packaged containers that you’re going to have to develop security patching processes for?” he said.

Vendors’ Kubernetes implementations offer stability, support

Fork is a fighting word in the open source world, and most vendors say their Kubernetes implementations don’t diverge from pure Kubernetes code. And early adopters of vendors’ Kubernetes implementations said enterprise support and security features are the top priorities as they roll out container orchestration tools, rather than conformance with upstream code, per se.

Amadeus, a global travel technology company, is an early adopter of Red Hat OpenShift. As such, Dietmar Fauser, vice president of core platforms and middleware at Amadeus, said he doesn’t worry about security patching or forked Kubernetes from Red Hat. While Red Hat could theoretically choose to deviate from, or fork, upstream Kubernetes, it hasn’t done so, and Fauser said he doubts the vendor ever will.

Meanwhile, Amadeus is on the cusp of multi-cloud container portability, with instances of OpenShift on Microsoft Azure, Google and AWS public clouds in addition to its on-premises data centers. Fauser said he expects the multi-cloud deployment process will go smoothly under OpenShift.

Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors.
Surya Suravarapu, assistant vice president of product development, Change Healthcare

“Red Hat is very good at maintaining open source software distributions, patching is consistent and easy to maintain, and I trust them to maintain a portable version of Kubernetes,” Fauser said. “Some upstream Kubernetes APIs come and go, but Red Hat’s approach offers stability.”

Docker containers and Kubernetes are de facto standards that span container environments and provide portability, regardless of which vendor’s Kubernetes implementation is in place, said Surya Suravarapu, assistant vice president of product development for Change Healthcare, a healthcare information technology company in Nashville, Tenn., that spun out of McKesson in March 2017.

Suravarapu declined to specify which vendor’s container orchestration tools the company uses, but said Change Healthcare uses multiple third-party Kubernetes tools and plans to put containers into production this quarter.

“Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors,” Suravarapu said. “The focus is on productivity improvements for our IT teams, where built-in tooling converts code to container images with the click of a button or one CLI [command-line interface] line, and compliance and security policies are available to all product teams.”

A standard way to manage containers in Kubernetes offers enough consistency between environments to improve operational efficiency, while portability between on-premises, public cloud and customer environments is a longer-term goal, Suravarapu said.

“We’re a healthcare IT company,” he added. “We can’t just go with a raw tool without 24/7 enterprise-level support.”

Still, Amadeus’s Fauser acknowledged there’s risk in trusting one vendor’s Kubernetes implementation, especially when that implementation is one of the more popular market options.

“Red Hat wants to own the whole ecosystem, so there’s the danger that they could limit other companies’ access to providing plug-ins for their platform,” he said.

That hasn’t happened, but the risk exists, Fauser said.

GCP Marketplace beats AWS, Azure to Kubernetes app store

Google Cloud Platform has made the first foray into a new frontier of Kubernetes ease of use with a cloud app store that includes apps preconfigured to run smoothly on the notoriously persnickety container orchestration platform.

As Kubernetes becomes the de facto container orchestration standard for enterprise DevOps shops, helping customers tame the notoriously complex management of the platform has become de rigueur for software vendors and public cloud service providers. Google stepped further into that territory with GCP Marketplace, a Kubernetes app store with application packages that can automatically be deployed onto container clusters with one click and then billed through Google Cloud Platform as a service.

A search of the AWS Marketplace for “Kubernetes” turned up Kubernetes infrastructure packages by Rancher, Bitnami and CoreOS, but not prepackaged apps from vendors such as Nginx and Elastic ready to be deployed on Kubernetes clusters, which is what GCP Marketplace offers. Another search of the Azure Marketplace returned similar results.

Just because Google is first to market with this Kubernetes app store doesn’t mean that what the company has done is magic.

“These marketplaces are based on Kubernetes template technologies such as Helm charts, so they’re widely available to everyone,” said Gary Chen, an analyst at IDC in Framingham, Mass. “I’m sure if [AWS and Azure] don’t have it, they are already working on something like this.”

Google executives said Helm charts factored into some of the app packages the company created with partners, but in other cases GCP Marketplace offerings grew out of its work with independent software vendors.

“Our approach gives vendors flexibility to use Helm or other packaging mechanisms, given that there isn’t a clear standard today,” the company said through a spokesperson.

A screenshot of Kubernetes apps in the revamped GCP Marketplace.

Initially, GCP Marketplace apps offer click-to-deploy support onto Kubernetes clusters that run in Google Kubernetes Engine, and GKE does telemetry monitoring and logging on those apps in addition to offering billing support.

But there’s nothing that precludes these packages from eventually running on premises or even in other public cloud providers’ infrastructures, said Jennifer Lin, product director for Google Cloud.

“It’s always within the realm of possibility, but not something we’re announcing today,” she said.

There is precedent for GCP products with third-party cloud management capabilities — the Google Stackdriver cloud monitoring tool, for example, can be used with AWS and Azure resources.

Initial app partners include the usual suspects among open source cloud-native infrastructure and middleware projects such as GitLab, Couchbase, CloudBees, Cassandra, InfluxDB, Elasticsearch, Prometheus, Nginx, RabbitMQ and Apache Spark. Commonly used web apps such as WordPress and a container security utility from Aqua Security Software will also be available in the initial release of GCP Marketplace.

Mainstream enterprise customers will look for more traditional apps such as MySQL, SQL Server and Oracle databases, as well as back-office and productivity apps. Lin said Google plans more mainstream app support, but she declined to specify which apps are on the GCP Marketplace roadmap.

Kubernetes hybrid cloud emerges from Google-Cisco partnership

A forthcoming Kubernetes hybrid cloud option that joins products from Cisco and Google promises smoother portability and security, but at this point its distinguishing features remain theoretical.

Cisco plans to release the Cisco Container Platform (CCP) in the first quarter of 2018, with support for Kubernetes container orchestration on its HyperFlex hyper-converged infrastructure product. Sometime later this year, a second version of the container platform will link up with Google Kubernetes Engine to deliver a Kubernetes hybrid cloud offering based on the Cisco-Google partnership made public in October 2017.

“Cisco can bring a consistent hybrid cloud experience to our customers,” said Thomas Scherer, chief architect at Telindus Telecom, an IT service provider in Belgium and longtime Cisco partner that plans to offer hosted container services based on CCP. Many enterprises already use Cisco’s products, which should boost CCP’s appeal, he said.

CCP 2.0 will extend the Cisco Application Centric Infrastructure software-defined network fabric into Google’s public cloud, and enable stretched Kubernetes clusters between on-premises data centers and public clouds, Cisco executives said. Stretched clusters would enable smooth container portability between multiple infrastructures, one of the most attractive promises of Kubernetes hybrid clouds for enterprise IT shops reluctant to move everything to the cloud. CCP also will support Microsoft Azure and Amazon Web Services public clouds, and eventually CCP will incorporate DevOps monitoring tools from AppDynamics, another Cisco property.

“Today, if I have a customer that is using containers, I put them on a dedicated hosting infrastructure, because I don’t have enough confidence that I can maintain customer segregation [in a container environment],” Scherer said. “I hope that Cisco will deliver in that domain.”

He also expects that the companies’ strengths in enterprise data center and public cloud infrastructure components will give the Kubernetes hybrid cloud a unified multi-cloud dashboard with container management.

“Is it going to be easy? No, and the components included in the product may change,” he said. “But my expectation is that it will happen.”

Version 2 of the Cisco Container Platform will connect enterprise data centers with Google’s public cloud infrastructure.

Kubernetes hybrid cloud decisions require IT unity

Cisco customers have plenty of other Kubernetes hybrid cloud choices to consider, some of which are already available. Red Hat and AWS joined forces last year to integrate Red Hat’s Kubernetes-based OpenShift Container Platform with AWS services. Microsoft has its Azure public cloud and Azure Stack for on-premises environments, and late last year added Azure Container Service Engine to Azure Stack with support for Kubernetes container management templates.

What Cisco is trying to do, along with other firms, is to expand its appeal to infrastructure and operations teams with monitoring, security and analytics features not included in Kubernetes.
Stephen Elliot, analyst, IDC

However, many enterprises continue to kick the tires on container orchestration software and most do not run containers in production, which means the Cisco-Google partnership has a window to establish itself.

“Kubernetes support is table stakes at this point,” said Stephen Elliot, analyst at IDC. “Part of what Cisco is trying to do, along with other firms, is to expand its appeal to infrastructure and operations teams with monitoring, security and analytics features not included in Kubernetes.”

As Kubernetes hybrid cloud options proliferate, enterprise IT organizations must unite traditionally separate buyers in security, application development, IT management and IT operations to evaluate and select a product. Otherwise, each constituency will be swayed by its established vendor’s product and chaos could ensue, Elliot said.

“There are a lot of moving parts, and organizations are grappling with whom in their organization to turn to for leadership,” he said. “Different buyers can’t make decisions in a vacuum anymore, and there are a lot of politics involved.”


Kubernetes storage projects dominate CNCF docket

Enterprise IT pros should get ready for Kubernetes storage tools, as the Cloud Native Computing Foundation seeks ways to support stateful applications.

The Cloud Native Computing Foundation (CNCF) began its push into container storage this week when it approved an inception-level project called Rook, which connects Kubernetes orchestration to the Ceph distributed file system through the Kubernetes operator API.

The Rook project’s approval illustrates the CNCF’s plans to emphasize Kubernetes storage.

“It’s going to be a big year for storage in Kubernetes, because the APIs are a little bit more solidified now,” said CNCF COO Chris Aniszczyk. The operator API and a Container Storage Interface API were released in the alpha stage with Kubernetes 1.9 in December. “[The CNCF technical board is] saying that the Kubernetes operator API is the way to go in [distributed container] storage,” he said.

Rook project gave Prometheus a seat on HBO’s Iron Throne

HBO wanted to deploy Prometheus for Kubernetes monitoring, and it ideally would have run the time-series database application on containers within the Kubernetes cluster, but that didn’t work well with cloud providers’ persistent storage volumes.


“You always have to do this careful coordination to make sure new containers only get created in the same availability zone. And if that entire availability zone goes away, you’re kind of out of luck,” said Illya Chekrygin, who directed HBO’s implementation of containers as a senior staff engineer in 2017. “That was a painful experience in terms of synchronization.”

Moreover, when containers that ran stateful apps were killed and restarted in different nodes of the Kubernetes cluster, it took too long to unmount, release and remount their attached storage volumes, Chekrygin said.

Rook was an early conceptual project in GitHub at that time, but HBO engineers put it into a test environment to support Prometheus. Rook uses a storage overlay that runs within the Kubernetes cluster and configures the cluster nodes’ available disk space as a giant pool of resources, which is in line with how Kubernetes handles CPU and memory resources.

Rather than synchronize data across multiple specific storage volumes or locations, Rook uses the Ceph distributed file system to stripe the data across multiple machines and clusters and to create multiple copies of data for high availability. That overcomes the data synchronization problem, and it avoids the need to unmount and remount external storage volumes.

“It’s using existing cluster disk configurations that are already there, so nothing has to be mounted and unmounted,” Chekrygin said. “You avoid external storage resources to begin with.”

At HBO, a mounting and unmounting process that took up to an hour was reduced to two seconds, which was suitable for the Kubernetes monitoring system in Prometheus that scraped telemetry data from the cluster every 10 to 30 seconds.
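The way an application such as Prometheus consumes Rook-managed storage is deliberately ordinary: it files a PersistentVolumeClaim against a storage class that Rook provisions from the cluster’s pooled disks. A minimal sketch with the Python Kubernetes client follows; the storage class name, namespace and size are assumptions for illustration, not details of HBO’s setup.

```python
# Sketch only: request a volume from an assumed Rook-provided StorageClass.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="prometheus-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="rook-ceph-block",  # hypothetical Rook StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="monitoring", body=pvc)
```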

However, Rook never saw production use at HBO, which, by policy, doesn’t put prerelease software into production. Instead, Chekrygin and his colleagues set up an external Prometheus instance that received a relay of monitoring data from an agent inside the Kubernetes cluster. That worked, but it required an extra network hop for data and made Prometheus management more complex.

“Kubernetes provides a lot of functionality out of the box, such as automatically restarting your Pod if your Pod dies, automatic scaling and service discovery,” Chekrygin said. “If you run a service somewhere else, it’s your responsibility on your own to do all those things.”

Kubernetes storage in the spotlight

Kubernetes is ill-equipped to handle data storage persistence … this is the next frontier and the next biggest thing.
Illya Chekrygin, founding member, Upbound

The CNCF is aware of the difficulty organizations face when they try to run stateful applications on Kubernetes. As of this week, it now owns the intellectual property and trademarks for Rook, which currently lists Quantum Corp. and Upbound, a startup in Seattle founded by Rook’s creator, Bassam Tabbara, as contributors to its open source code. As an inception-level project, Rook isn’t a sure thing, more akin to a bet on an early stage idea. It has about a 50-50 chance of panning out, CNCF’s Aniszczyk said.

Inception-level projects must update their presentations to the technical board once a year to continue as part of CNCF. From the inception level, projects may move to incubation, which means they’ve collected multiple corporate contributors and established a code of conduct and governance procedures, among other criteria. From incubation, projects then move to the graduated stage, although the CNCF has yet to even designate Kubernetes itself a graduated project. Kubernetes and Prometheus are expected to graduate this year, Aniszczyk said.

The upshot for container orchestration users is Rook will be governed by the same rules and foundation as Kubernetes itself, rather than held hostage by a single for-profit company. The CNCF could potentially support more than one project similar to Rook, such as Red Hat’s Gluster-based Container Native Storage Platform, and Aniszczyk said those companies are welcome to present them to the CNCF technical board.

Another Kubernetes storage project that may find its way into the CNCF, and potentially complement Rook, was open-sourced by container storage software maker Portworx this week. The Storage Orchestrator Runtime for Kubernetes (STORK) uses the Kubernetes orchestrator to automate operations within storage layers such as Rook to respond to applications’ needs. However, STORK needs more development before it is submitted to the CNCF, said Gou Rao, founder and CEO at Portworx, based in Los Altos, Calif.

Kubernetes storage seems like a worthy bet to Chekrygin, who left his three-year job with HBO this month to take a position as an engineer at Upbound.

“Kubernetes is ill-equipped to handle data storage persistence,” he said. “I’m so convinced that this is the next frontier and the next biggest thing, I was willing to quit my job.”


Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once the user migrates from VMs to containers. As containers appear and disappear much more frequently, managing those connections at scale quickly can create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse at how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as a proxy for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.

Proxies in a service mesh handle one-time connections between microservices better than traditional networking models do. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
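The pattern itself is simple: the proxy runs as a second container in the same pod as the application, sharing the pod’s network namespace so it can handle traffic on the application’s behalf. The bare-bones sketch below shows that shape with the Python Kubernetes client; the images, ports and labels are placeholders, and the traffic-redirection wiring that a real mesh injects is omitted.

```python
# Sketch of a sidecar-proxy pod: one application container plus one proxy
# container sharing the pod's network namespace. Placeholder values throughout.
from kubernetes import client

app = client.V1Container(
    name="app",
    image="example/app:latest",
    ports=[client.V1ContainerPort(container_port=8080)],
)

proxy = client.V1Container(
    name="proxy",
    image="envoyproxy/envoy:v1.31.0",  # assumed image tag; any mesh proxy plays this role
    ports=[client.V1ContainerPort(container_port=15001)],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-sidecar", labels={"app": "example"}),
    spec=client.V1PodSpec(containers=[app, proxy]),
)
# A real service mesh also redirects the app's inbound and outbound traffic
# through the proxy, typically via an init container or a CNI plugin.
```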

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With an Envoy sidecar proxy alongside each service, each of Lyft’s services only had to understand its local portion of the network, and the language an application was written in no longer factored into its networking function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some as many as 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet — it uses HashiCorp Consul and CoreOS rkt container runtime instead of Kubernetes and Docker — service mesh will be key to container networking on any platform.

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the best-known and most widely used tools among the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins a fresh approach to the centralized management layer, called Istio. This month, Buoyant also introduced a better-performing, more efficient successor to Linkerd, called Conduit.

Linkerd gave us a look behind the scenes of our apps … Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.
Borys Pierov, software developer, National Center for Biotechnology Information

It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include IBM, Microsoft and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but there have been problems under this system with terminating nodes cleanly and removing them from Calico quickly so traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. It has done proof-of-concept evaluations around Linkerd and plans to look at Conduit, Lipke said in an interview after his KubeCon session.


Rancher’s Kubernetes strategy relieves container complexity

Rancher Kubernetes support will be its default approach to container orchestration for customers, and it’s another sign that enterprises have picked a winner in this emerging field.

As of Rancher 2.0, released this week, all of its customers will be Kubernetes users from the moment they install the company’s container management software. Rancher has previously supported other container orchestration tools, including its own Cattle product, but will prioritize Kubernetes in the future.

“Kubernetes will be a fundamental part of the enterprise IT infrastructure, and companies like us have to keep adapting to stay in the game,” said Sheng Liang, the company’s co-founder and CEO. “It could bring vendors together to define a standard infrastructure platform for the industry.”

Rancher’s Kubernetes decision was made possible by features added in Kubernetes 1.6, such as more flexible role-based access control, Liang said. Kubernetes 1.6 allows for impersonation, which means Rancher can smooth the way for hybrid cloud deployments of Kubernetes with Active Directory support for Google’s Container Engine (GKE). Previously, hybrid cloud environments that used GKE would require every user to have a Google credential in addition to whatever user authentication program the company uses on premises, such as Active Directory or LDAP.
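Impersonation is an API server feature: a caller whose own credentials carry the right RBAC permissions sends impersonation headers, and the request is then authorized as the named user or group. A minimal sketch with the official Python client, using made-up user and group names, looks roughly like this:

```python
# Minimal sketch: the caller's credentials must be allowed to impersonate the
# user and group below, which are hypothetical names.
from kubernetes import client, config

config.load_kube_config()

api_client = client.ApiClient()
api_client.set_default_header("Impersonate-User", "jane@example.com")
api_client.set_default_header("Impersonate-Group", "developers")

core = client.CoreV1Api(api_client)
# The API server authorizes this call against jane@example.com's RBAC bindings,
# not the caller's own identity.
pods = core.list_namespaced_pod("default")
print([p.metadata.name for p in pods.items])
```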

Rancher 2.0 can also centralize management for multiple container clusters that use different versions and distributions of Kubernetes. IT administrators can import clusters into Rancher without the need to rebuild them or pool them through Kubernetes Cluster Federation, Liang said.

Rancher’s Kubernetes choice wins customer approval

Rancher users support the company’s new direction, and in their eyes Kubernetes has captured the lead in container orchestration.

At Sling TV, a subsidiary of Dish Network, Rancher with Kubernetes support won a bake-off in May 2016 against Pivotal Cloud Foundry, Docker Datacenter and Mesosphere DC/OS. At the time, Kubernetes was the most mature and affordable of the container orchestration platforms, said Brad Linder, DevOps and big data evangelist at Dish Technologies, the IT arm of Dish Networks in Englewood, Colo.

“Each of the other systems had a deficiency of some sort: Pivotal Cloud Foundry was pricey, and at the time Docker Datacenter had issues with routing traffic to containers before they became available,” Linder said. “DC/OS seemed better suited to larger clusters with thousands of nodes, not the kind of deployments we were looking for.”

Docker has since shored up traffic routing in clusters with the swarm mode routing mesh it added to Docker Datacenter with version 1.12 in November 2016, but Linder’s team was also drawn to Kubernetes by the tools that had already been built around it, including Rancher. Even then, he said, it was clear Kubernetes would be broadly supported, and that could pay portability dividends down the road.

“I’m trying not to hitch my wagon to any one vendor as we build out our approach to cloud services,” Linder said. “I don’t want to have to commit to any of them.”

With vanilla Kubernetes under the hood on his selected tools, Linder won’t be bound to one cloud computing vendor.

Rancher has been essential for Dish to roll out container clusters, Linder said. The company plans to launch its first production cluster — a new push notification app deployment for Sling TV — by the end of the year, and Rancher will support the whole stack.

“There have been times they’ve helped us troubleshoot network and VM issues, and helped us come up to speed with containers generally,” he said.

Kubernetes installation is actually the easy part. We’ve had some head-scratching moments with logging distributed microservices and solving the complexity of container networking.
Brad Linder, DevOps and big data evangelist, Dish Technologies

Rancher makes Kubernetes setup easier with a UI that helps admins interpret the “YAML files everywhere” that are a part of upstream Kubernetes installations, Linder said. Rancher has also helped with connected tools, such as the open source Prometheus monitoring utility and virtual network overlays.

“Kubernetes installation is actually the easy part,” Linder said. “We’ve had some head-scratching moments with logging distributed microservices and solving the complexity of container networking.”

Rancher 2.0 adds further refinements that will help with container management, such as a new integration with continuous integration and continuous deployment tool Jenkins that smooths the connection between CI/CD pipelines and Kubernetes, Linder said.

Kubernetes integration strategy reflects growing trend

Rancher’s Kubernetes alliance continues a year of growing momentum for the Google-backed container orchestration platform. Big IT vendors, including Amazon Web Services, Microsoft and Oracle, joined the Cloud Native Computing Foundation in the summer of 2017 to help govern Kubernetes development. Erstwhile Kubernetes rival Mesosphere rolled out Kubernetes support in version 1.10 of its DC/OS software earlier this month, and in mid-September, configuration management player Puppet acquired Distelli, which bases its container management software product on Kubernetes as well.

These changes indicate Kubernetes has become “the clear and outright leader” in container management and orchestration platforms, said Jay Lyman, an analyst at 451 Research in New York. Several dozen vendors support Kubernetes for container orchestration, while only about a dozen each back Docker swarm mode and Apache Mesos, he said.

“IT organizations almost have to have Kubernetes on their radar and a strategy around it,” Lyman said. “Apprehension about its complexity had been an impediment to its growth, but the excitement is greater than that apprehension at this point.”

Rancher and Kubernetes show that while upstream Kubernetes remains complex, there’s no shortage of partners willing to offer management features to mitigate that issue, Lyman said.


IT pros get comfortable with Kubernetes in production

IT pros who’ve run production workloads with Kubernetes for at least a year say it can open up frontiers for IT operations within their organizations.

It’s easier to find instances of Kubernetes in production in the enterprise today versus just a year ago. This is due to the proliferation of commercial platforms that package this open source container orchestration software for enterprise use, such as CoreOS Tectonic and Rancher Labs’ container management product, Rancher. In the two years since the initial release of Kubernetes, early adopters said the platform has facilitated big changes in high availability (HA) and application portability within their organizations.

For example, disaster recovery (DR) across availability zones (AZs) in the Amazon Web Services (AWS) public cloud was notoriously unwieldy with VM-based approaches. Yet, it has become the standard for Kubernetes deployments at SAP’s Concur Technologies during the last 18 months.

Concur first rolled out the open source, upstream Kubernetes project in production to support a receipt image service in December 2015, at a time when clusters that spanned multiple AZs for HA were largely unheard-of, said Dale Ragan, principal software engineer for the firm, based in Bellevue, Wash.

“We wanted to prepare for HA, running it across AZs, rather than one cluster per AZ, which is how other people do it,” Ragan said. “It’s been pretty successful — we hardly ever have any issues with it.”

Ragan’s team seeks 99.999% uptime for the receipt image service, and it’s on the verge of meeting this goal now with Kubernetes in production, Ragan said.

Kubernetes in production offers multicloud multi-tenancy

Kubernetes has spread to other teams within Concur, though those teams run multi-tenant clusters based on CoreOS’s Tectonic, while Ragan’s team sticks to a single-tenant cluster still tied to upstream Kubernetes. The goal is to move that first cluster to CoreOS, as well, though the company must still work out licensing and testing to make sure the receipt imaging app works well on Tectonic, Ragan said. CoreOS has prepared for this transition with recent support for the Terraform infrastructure-as-code tool, with which Ragan’s team underpins its Kubernetes cluster.

CoreOS just released a version of Tectonic that supports automated cluster creation and HA failover across AWS and Microsoft Azure clouds, which is where Concur will take its workloads next, Ragan said.

“Using other cloud providers is a big goal of ours, whether it’s for disaster recovery or just to run a different cluster on another cloud for HA,” Ragan said. With this in mind, Concur has created its own tool to monitor resources in multiple infrastructures called Scipian, which it will soon release to the open source community.

Ragan said the biggest change in the company’s approach to Kubernetes in production has been a move to multi-tenancy in newer Tectonic clusters and the division of shared infrastructures into consumable pieces with role-based access. Network administrators can now provision a network, and it can be consumed by developers that roll out Kubernetes clusters without having to grant administrative access to those developers, for example.
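That division of duties rests on standard Kubernetes RBAC objects: a namespaced Role grants a team only the verbs and resources it needs, and a RoleBinding ties the Role to the team’s group. The sketch below is purely illustrative — the namespace, group and resource choices are assumptions, not Concur’s actual policy.

```python
# Illustrative RBAC objects: a Role allowing read-only access to network
# policies in one namespace, bound to a hypothetical developer group.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(name="network-reader", namespace="team-a"),
    rules=[client.V1PolicyRule(
        api_groups=["networking.k8s.io"],
        resources=["networkpolicies"],
        verbs=["get", "list", "watch"],
    )],
)

binding = client.V1RoleBinding(
    metadata=client.V1ObjectMeta(name="team-a-network-reader", namespace="team-a"),
    role_ref=client.V1RoleRef(api_group="rbac.authorization.k8s.io",
                              kind="Role", name="network-reader"),
    # Subject passed as a plain mapping to stay independent of client versions.
    subjects=[{"kind": "Group", "name": "team-a-devs",
               "apiGroup": "rbac.authorization.k8s.io"}],
)

rbac.create_namespaced_role(namespace="team-a", body=role)
rbac.create_namespaced_role_binding(namespace="team-a", body=binding)
```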

In the next two years, Ragan said he expects to bring the company’s databases into the Kubernetes fold to also gain container-based HA and DR across clouds. For this to happen, the Kubernetes 1.7 additions to StatefulSets and secrets management must emerge from alpha and beta versions as soon as possible; Ragan said he hopes to roll out those features before the end of this year.

Kubernetes in production favors services-oriented approach

Dallas-based consulting firm etc.io uses HA across cloud data centers and service providers for the clients it helps deploy containers. During the most recent Amazon outage, etc.io clients failed over between AWS and the public cloud providers OVH and Linode through Rancher’s orchestration of Kubernetes clusters, said E.T. Cook, chief advocate for the firm.

“With Rancher, you can orchestrate domains across multiple data centers or providers,” Cook said. “It just treats them all as one giant intranetwork.”

In the next two years, Cook said he expects Rancher will make not just cloud infrastructures, but container orchestration platforms such as Docker Swarm and Kubernetes interchangeable with little effort. He said he evaluates these two platforms frequently because they change so fast. Cook said it’s too soon to pick a winner in the container orchestration market yet, despite the momentum behind Kubernetes in production at enterprises.

Docker’s most recent Enterprise Edition release favors enterprise approaches to software architectures that are stateful and based on permanent stacks of resources. This is in opposition to Kubernetes, which Cook said he sees as geared toward ephemeral stateless workloads, regardless of its recent additions to StatefulSets and access control features.

It’s like the early days of HD DVD vs. Blu-ray … long term, there may be another major momentum shift.
E.T. Cook, chief advocate, etc.io

“Much of the time, there’s no functional difference between Docker Swarm and Kubernetes, but they have fundamentally different ways of getting to that result,” Cook said.

The philosophy behind Kubernetes favors API-based service architecture, where interactions between services are often payloads, and “minions” scale up as loads and queues increase, Cook said. In Docker, by contrast, the user sets up a load balancer, which then forwards requests to scaled services.

“The services themselves are first-class citizens, and the load balancers expose to the services — whereas in the Kubernetes philosophy, the service or endpoint itself is the first-class citizen,” Cook said. “Requests are managed by the service themselves in Kubernetes, whereas in Docker, scaling and routing is done using load balancers to replicated instances of that service.”
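In Kubernetes terms, that first-class citizen is the Service object: it gets a stable name and virtual IP, and the platform routes requests to whatever pods currently match its label selector. A minimal sketch with the Python client, using placeholder names, labels and ports:

```python
# Minimal sketch of a Kubernetes Service fronting pods labeled app=payments.
# Names, labels and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="payments"),
    spec=client.V1ServiceSpec(
        selector={"app": "payments"},  # traffic goes to pods carrying this label
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```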

The two platforms now compete for enterprise hearts and minds, but before too long, Cook said, it might make sense for organizations to use each for different tasks — perhaps Docker serving the web front end and Kubernetes powering the back-end processing.

Ultimately, Cook said he expects Kubernetes to find a long-term niche backing serverless deployments for cloud providers and midsize organizations, while Docker finds its home within the largest enterprises that have the critical mass to focus on scaled services. For now, though, he’s hedging his bets.

“It’s like the early days of HD DVD vs. Blu-ray,” Cook said. “Long term, there may be another major momentum shift — even though, right now, the market’s behind Kubernetes.”
