Tag Archives: Container

Container backup grows, following container adoption

The popularity of container deployments is reaching a tipping point where all backup vendors will eventually need to be able to support them, industry experts said.

As container technology adoption increases, the need for container backup grows. Until now, most containers have been stateless and required no backup.

“We’re going to be seeing more stateful containers, buoyed by the fact that now there’s ways to protect them,” said Steven Hill, senior analyst at 451 Research.

Tom Barton, CEO of container storage vendor Diamanti, said he is seeing more customers run containers with persistent storage. When containers replace virtual machines, Barton said, they carry the same data protection and disaster recovery (DR) requirements.

“I think containers will generally displace VMs in the long-run,” Barton said.

Diamanti recently launched the beta version of its Spektra platform, a Kubernetes management layer designed for migrating Kubernetes workloads between on-premises environments and the cloud. Spektra enables high availability and DR for Kubernetes workloads, and Barton said Diamanti and its competitors partner with data protection vendors to provide container backup.

Other products that offer container backup include Veritas NetBackup, which introduced its Docker container support at the beginning of this year, and IBM Spectrum Protect, which has recently entered this space by rolling out snapshotting for Kubernetes users.

Hill shared similar beliefs about containers replacing VMs but stressed it will not be a one-for-one replacement. Economics will always play a role, he said: some applications and workloads will make sense to keep on VMs, while others will belong on containers. The situation will vary between organizations, but it wouldn’t be fair to say containers are strictly better than VMs, or vice versa.

Screenshot of vProtect version 3.9
Storware vProtect supports a wide variety of hypervisors.

“You never do everything with just the one tool,” Hill said.

Hill also stressed that containers themselves aren’t a mature market or technology yet, and vendors are still waiting to see how organizations are using them. Customers putting mission-critical applications on containers have nudged demand for data protection, backup, recovery, availability and failover — the same kind of capabilities expected in any environment. Vendors are responding to this demand, but the tools aren’t ready yet.

“Protecting stateful containers is still relatively new. The numbers aren’t there to define a real market,” Hill said.

Marc Staimer, president of Dragon Slayer Consulting, said containers still lack the security, flexibility and resilience of VMs. He chalks that up to containers’ lack of maturity. As customers put containers into production, they will realize the technology’s shortcomings, and vendors will develop products and features to address those problems. Staimer said the industry has recently reached a tipping point where there’s enough container adoption to catch vendor interest.

Staimer acknowledged that when containers mature to the same point where hypervisors are now, there will be widespread replacement. Like Hill, he does not expect it to be a wholesale replacement.

“We like to believe these things are winner-takes-all, but they’re not,” Staimer said. “In tech, nothing goes away.”

Staimer said from a technical standpoint, container backup has unique problems that differentiate it from traditional server, VM and SaaS application backup. The core problem is that containers don’t have APIs that allow backup software to take a snapshot of the state of the container. Most backup vendors install agents in containers to scan and capture what they need to build a recoverable snapshot. This takes time and resources, which runs counter to the intent of containers as lightweight alternatives to VMs.

Trilio CEO David Safaii said installing agents in containers also creates extra hassle for developers, because they have to go through an IT admin to conduct their backups. He said there’s a “civil war” between IT managers and DevOps. IT managers need to worry about data protection, security and compliance. These are all important and necessary measures, but they can get in the way of the DevOps philosophy of continuous and agile development.

Trilio recently launched the beta program for its TrilioVault for Kubernetes, which is an agentless container backup offering. Asigra similarly performs container backup without using agents, as does Poland-based Storware’s vProtect.

Storware vProtect started in the container backup space by focusing on open platforms first, protecting Red Hat OpenShift and Kubernetes projects. Storware CTO Paweł Mączka said no one asked for container data protection in the early days because container workloads were microservices and applications.

Mączka said customers now use containers as they would VMs. DevOps teams now put databases in containers, shifting them from stateless to stateful. However, Mączka doesn’t see containers taking over and proliferating to the same point as hypervisors such as VMware vSphere and Microsoft Hyper-V, which vProtect only started supporting in its latest version 3.9 update.
“I don’t think they’ll rule the world, but it’s important to have the [container backup] feature,” Mączka said.


Kubernetes tools vendors vie for developer mindshare

SAN DIEGO — The notion that Kubernetes solves many problems as a container orchestration technology belies the complexity it adds in other areas, namely for developers who need Kubernetes tools.

Developers at the KubeCon + CloudNativeCon North America 2019 event here this week noted that although native tooling for development on Kubernetes continues to improve, there’s still room for more.

“I think the tooling thus far is impressive, but there is a long way to go,” said a software engineer and Kubernetes committer who works for a major electronics manufacturer and requested anonymity.

Moreover, “Kubernetes is extremely elegant, but there are multiple concepts for developers to consider,” he said. “For instance, I think the burden of the onboarding process for new developers and even users sometimes can be too high. I think we need to build more tooling, as we flesh out the different use cases that communities bring out.”

Developer-oriented approach

Enter Red Hat, which introduced an update of its Kubernetes-native CodeReady Workspaces tool at the event.

Red Hat CodeReady Workspaces 2 enables developers to build applications and services on their laptops that mirror the environment they will run in production. And onboarding is but one of the target use cases for the technology, said Brad Micklea, vice president of developer tools, developer programs and advocacy at Red Hat.

The technology is especially useful in situations where security is an issue, such as bringing in new contracting teams or using offshore development teams where developers need to get up and running with the right tools quickly.

I think the tooling thus far is impressive, but there is a long way to go.
Anonymous Kubernetes committer

CodeReady Workspaces runs on the Red Hat OpenShift Kubernetes platform.

Initially, new enterprise-focused developer technologies are generally used in experimental, proof-of-concept projects, said Charles King, an analyst at Pund-IT in Hayward, Calif. Yet over time those that succeed, like Kubernetes, evolve from the proof-of-concept phase to being deployed in production environments.

“With CodeReady Workspaces 2, Red Hat has created a tool that mirrors production environments, thus enabling developers to create and build applications and services more effectively,” King said. “Overall, Red Hat’s CodeReady Workspaces 2 should make life easier for developers.”

In addition to popular features from the first version, such as an in-browser IDE, one-click developer workspaces, and support for Lightweight Directory Access Protocol, Active Directory and OpenAuth, CodeReady Workspaces 2 adds support for Visual Studio Code extensions, a new user interface, air-gapped installs and a shareable workspace configuration known as Devfile.

“Workspaces is just generally kind of a way to package up a developer’s working workspace,” Red Hat’s Micklea said.

Overall, the Kubernetes community is primarily “ops-focused,” he said. However, tools like CodeReady Workspaces help to empower both developers and operations.

For instance, at KubeCon, Amr Abdelhalem, head of the cloud platform at Fidelity Investments, said the way he gets teams initiated with Kubernetes is to have them deliver on small projects and move on from there. CodeReady Workspaces is ideal for situations like that because it simplifies developer adoption of Kubernetes, Micklea said.

Such a tool could be important for enterprises that are banking on Kubernetes to move them into a DevOps model to achieve business transformation, said Charlotte Dunlap, an analyst with GlobalData.

“Vendors like Red Hat are enhancing Kubernetes tools and CLI [Command Line Interface] UIs to bring developers with more access and visibility into the ALM [Application Lifecycle Management] of their applications,” Dunlap said. “Red Hat CodeReady Workspaces is ultimately about providing enterprises with unified management across endpoints and environments.”

Competition for Kubernetes developer mindshare

Other companies that focus on the application development platform, such as IBM and Pivotal, have also joined the Kubernetes developer enablement game.

Earlier this week, IBM introduced a set of new open-source tools to help ease developers’ Kubernetes woes. Meanwhile, at KubeCon this week, Pivotal made its Pivotal Application Service (PAS) on Kubernetes generally available and also delivered a new release of the alpha version of its Pivotal Build Service. The PAS on Kubernetes tool enables developers to focus on coding while the platform automatically handles software deployment, networking, monitoring, and logging.

The Pivotal Build Service enables developers to build containers from source code for Kubernetes, said James Watters, senior vice president of strategy at Pivotal. The service automates container creation, management and governance at enterprise scale, he said.

The build service brings technologies such as Pivotal’s kpack and Cloud Native Buildpacks to the enterprise. Cloud Native Buildpacks address dependencies in the middleware layer, such as language-specific frameworks. Kpack is a set of resource controllers for Kubernetes. The Build Service defines the container image, its contents and where it should be kept, Watters said.
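
Kpack runs the buildpacks process as in-cluster controllers; the same flow can be tried locally with the open source pack CLI. The sketch below is illustrative rather than Pivotal's exact workflow, and the image name, builder and source path are placeholders:

# Build an OCI image directly from application source, no Dockerfile needed
pack build registry.example.com/my-app:latest \
  --builder cloudfoundry/cnb:bionic \
  --path ./my-app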

Indeed, Watters said he believes it just might be game over in the Kubernetes tools space, because Pivotal owns the Spring Framework and Spring Boot, which appeal to a wide swath of Java developers. Java, he said, is “one of the most popular ways enterprises build applications today.”

“There is something to be said for the appeal of Java in that my team would not need to make wholesale changes to our build processes,” said a Java software developer for a financial services institution who requested anonymity because he was not cleared to speak for the organization.

Yet in today’s polyglot programming world, the choice of programming language is less of an issue, as teams can switch languages at will. Fidelity’s Abdelhalem, for instance, said his teams focus less strictly on tools and more on overall technology and strategy to determine what fits in their environment.


Kubernetes security opens a new frontier: multi-tenancy

SAN DIEGO — As enterprises expand production container deployments, a new Kubernetes security challenge has emerged: multi-tenancy.

One of the many challenges with multi-tenancy in general is that it is not easy to define; few IT pros agree on a single definition or architectural approach. Broadly speaking, however, multi-tenancy occurs when multiple projects, teams or tenants share a centralized IT infrastructure but remain logically isolated from one another.

Kubernetes multi-tenancy also adds multilayered complexity to an already complex Kubernetes security picture, and demands that IT pros wire together a stack of third-party and, at times, homegrown tools on top of the core Kubernetes framework.

This is because core upstream Kubernetes security features are limited to service accounts for operations such as role-based access control — the platform expects authentication and authorization data to come from an external source. Kubernetes namespaces also don’t offer especially granular or layered isolation by default. Typically, each namespace corresponds to one tenant, whether that tenant is defined as an application, a project or a service.
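
For context, the namespace-per-tenant pattern described above boils down to a handful of standard kubectl commands. The tenant and group names below are hypothetical:

# One namespace per tenant
kubectl create namespace team-a

# RBAC: let that tenant's group edit resources only inside its own namespace
kubectl create rolebinding team-a-edit \
  --clusterrole=edit \
  --group=team-a-developers \
  --namespace=team-a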

“To build logical isolation, you have to add a bunch of components on top of Kubernetes,” said Karl Isenberg, tech lead manager at Cruise Automation, a self-driving car service in San Francisco, in a presentation about Kubernetes multi-tenancy here at KubeCon + CloudNativeCon North America 2019 this week. “Once you have Kubernetes, Kubernetes alone is not enough.”

Karl Isenberg, Cruise Automation
Karl Isenberg, tech lead manager at Cruise Automation, presents at KubeCon about multi-tenant Kubernetes security.

However, Isenberg and other presenters here said Kubernetes multi-tenancy can have significant advantages if done right. Cruise, for example, runs very large Kubernetes clusters, with up to 1,000 nodes, shared by thousands of employees, teams, projects and some customers. Kubernetes multi-tenancy means more highly efficient clusters and cost savings on data center hardware and cloud infrastructure.

“Lower operational costs is another [advantage] — if you’re starting up a platform operations team with five people, you may not be able to manage five [separate] clusters,” Isenberg added. “We [also] wanted to make our investments in focused areas, so that they applied to as many tenants as possible.”

Multi-tenant Kubernetes security an ad hoc practice for now

The good news for enterprises that want to achieve Kubernetes multi-tenancy securely is that there are a plethora of third-party tools they can use to do it, some of which are sold by vendors, and others open sourced by firms with Kubernetes development experience, including Cruise and Yahoo Media.

Duke Energy Corporation, for example, has a 60-node Kubernetes cluster in production that’s stretched across three on-premises data centers and shared by 100 web applications so far. The platform comprises several vendors’ products, from Diamanti hyper-converged infrastructure to Aqua Security Software’s container firewall, which logically isolates tenants from one another at a granular level that accounts for the ephemeral nature of container infrastructure.

“We don’t want production to talk to anyone [outside of it],” said Ritu Sharma, senior IT architect at the energy holding company in Charlotte, N.C., in a presentation at KubeSec Enterprise Summit, an event co-located with KubeCon this week. “That was the first question that came to mind — how to manage cybersecurity when containers can connect service-to-service within a cluster.”

Some Kubernetes multi-tenancy early adopters also lean on managed cloud services such as Google Kubernetes Engine (GKE) to take on parts of the Kubernetes security burden. GKE can encrypt secrets in the etcd data store, a capability that became available in Kubernetes 1.13 but isn’t enabled by default, according to a KubeSec presentation by Mike Ruth, one of Cruise’s staff security engineers.

Google also offers Workload Identity, which matches up GCP identity and access management with Kubernetes service accounts so that users don’t have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Kubernetes SIG-Auth looks to modernize how Kubernetes security tokens are handled by default upstream to smooth Kubernetes secrets management across all clouds, but has run into snags with the migration process.
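
Roughly, the pairing Workload Identity performs looks like the two commands below. Treat this as a sketch rather than the exact GKE procedure; the project, namespace and account names are placeholders:

# Let the Kubernetes service account impersonate a GCP service account
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# Tell GKE which GCP identity the Kubernetes service account maps to
kubectl annotate serviceaccount my-ksa -n my-namespace \
  iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com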

In the meantime, Verizon’s Yahoo Media has open sourced a project called Athenz, which handles multiple aspects of authentication and authorization in its on-premises Kubernetes environments, including automatic secrets rotation, expiration and limited-audience policies for intracluster communication similar to those offered by GKE’s Workload Identity. Cruise has created a similar open source tool called RBACSync, along with Daytona, which fetches secrets from HashiCorp Vault (where Cruise stores secrets instead of in etcd) and injects them into running applications, and k-rail for workload policy enforcement.
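
Daytona itself is Cruise’s tool, but the operation it automates is an ordinary Vault login and read, roughly like the following. The auth role and secret path are placeholders:

# Authenticate to Vault with the pod's Kubernetes service-account token
# (the login returns a client token, which must be exported as VAULT_TOKEN before the read)
vault write auth/kubernetes/login role=my-app \
  jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token

# Fetch an application secret and print a single field
vault kv get -field=password secret/my-app/db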

Kubernetes Multi-Tenancy Working Group explores standards

While early adopters have plowed ahead with an amalgamation of third-party and homegrown tools, some users in highly regulated environments look to upstream Kubernetes projects to flesh out more standardized Kubernetes multi-tenancy options.

For example, investment banking company HSBC can use Google’s Anthos Config Management (ACM) to create hierarchical, or nested, namespaces, which make for more highly granular access control mechanisms in a multi-tenant environment and simplify their management by automatically propagating shared policies between them. However, the company is following the work of a Kubernetes Multi-Tenancy Working Group established in early 2018 in the hope that it will introduce free open source utilities compatible with multiple public clouds.

Sanjeev Rampal, Kubernetes Multi-Tenancy Working Group
Sanjeev Rampal, co-chair of the Kubernetes Multi-Tenancy Working Group, presents at KubeCon.

“If I want to use ACM in AWS, the Anthos license isn’t cheap,” said Scott Surovich, global container engineering lead at HSBC, in an interview after a presentation here. Anthos also requires VMware server virtualization, and hierarchical namespaces available at the Kubernetes layer could offer Kubernetes multi-tenancy on bare metal, reducing the layers of abstraction and potentially improving performance for HSBC.

Homegrown tools for multi-tenant Kubernetes security won’t fly in HSBC’s highly regulated environment, either, Surovich said.

“I need to prove I have escalation options for support,” he said. “Saying, ‘I wrote that’ isn’t acceptable.”

So far, the working group has two incubation projects that create custom resource definitions — essentially, plugins — that support hierarchical namespaces and virtual clusters that create self-service Kubernetes API Servers for each tenant. The working group has also created working definitions of the types of multi-tenancy and begun to define a set of reference architectures.

The working group is also considering certification of multi-tenant Kubernetes security and management tools, as well as benchmark testing and evaluation of such tools, said Sanjeev Rampal, a Cisco principal engineer and co-chair of the group.


Enterprise IT weighs pros and cons of multi-cloud management

Multi-cloud management among enterprise IT shops is real, but the vision of routine container portability between clouds has yet to be realized for most.

Multi-cloud management is more common as enterprises embrace public clouds and deploy standardized infrastructure automation platforms, such as Kubernetes, within them. Most commonly, IT teams look to multi-cloud deployments for workload resiliency and disaster recovery, or as the most reasonable approach to combining companies with loyalty to different public cloud vendors through acquisition.

“Customers absolutely want and need multi-cloud, but it’s not the old naïve idea about porting stuff to arbitrage a few pennies in spot instance pricing,” said Charles Betz, analyst at Forrester Research. “It’s typically driven more by governance and regulatory compliance concerns, and pragmatic considerations around mergers and acquisitions.”

IT vendors have responded to this trend with a barrage of marketing around tools that can be used to deploy and manage workloads across multiple clouds. Most notably, IBM’s $34 billion bet on Red Hat revolves around multi-cloud management as a core business strategy for the combined companies, and Red Hat’s OpenShift Container Platform version 4.2 updated its Kubernetes cluster installer to support more clouds, including Azure and Google Cloud Platform. VMware and Rancher also use Kubernetes to anchor multi-cloud management strategies, and even cloud providers such as Google offer products such as Anthos with the goal of managing workloads across multiple clouds.

For some IT shops, easier multi-cloud management is a key factor in Kubernetes platform purchasing decisions.

“Every cloud provider has hosted Kubernetes, but we went with Rancher because we want to stay cloud-agnostic,” said David Sanftenberg, DevOps engineer at Cardano Risk Management Ltd, an investment consultancy firm in the U.K. “Cloud outages are rare, but it’s nice to know that on a whim we can spin up a cluster in another cloud.”

Multi-cloud management requires a deliberate approach

With Kubernetes and VMware virtual machines as common infrastructure templates, some companies use multiple cloud providers to meet specific business requirements.

Unified communications-as-a-service provider 8×8, in San Jose, Calif., maintains IT environments spread across 15 self-managed data centers, plus AWS, Google Cloud Platform, Tencent and Alibaba clouds. Since the company’s business is based on connecting clients through voice and video chat globally, placing workloads as close to customers’ locations as possible is imperative, and this makes managing multiple cloud service providers worthwhile. The company’s IT ops team keeps an eye on all its workloads with VMware’s Wavefront cloud monitoring tool.

Dejan Deklich, chief product officer, 8×8

 “It’s all the same [infrastructure] templates, and all the monitoring and dashboards stay exactly the same, and it doesn’t really matter where [resources] are deployed,” said Dejan Deklich, chief product officer at 8×8. “Engineers don’t have to care where workloads are.”

Multiple times a year, Deklich estimated, the company uses container portability to move workloads between clouds when it gets a good deal on infrastructure costs, although it doesn’t move them in real time or spread apps among multiple clouds. Multi-cloud migration also only applies to a select number of 8×8’s workloads, Deklich said.

We made a conscious decision that we want to be able to move from cloud to cloud. It depends on how deep you go into integration with a given cloud provider.
Dejan Deklich, chief product officer, 8×8

“If you’re in [AWS] and using RDS, you’re not going to be able to move to Oracle Cloud, or you’re going to suffer connectivity issues; you can make it work, but why would you?” he said. “There are workloads that can elegantly be moved, such as real-time voice or video distribution around the world, or analytics, as long as you have data associated with your processing, but moving large databases around is not a good idea.”

Maintaining multi-cloud portability also requires a deliberate approach to integration with each cloud provider.

“We made a conscious decision that we want to be able to move from cloud to cloud,” Deklich said. “It depends on how deep you go into integration with a given cloud provider — moving a container from one to the other is no problem if the application inside is not dependent on a cloud-specific infrastructure.”

The ‘lowest common denominator’ downside of multi-cloud

Not every organization buys in to the idea that multi-cloud management’s promise of freedom from cloud lock-in is worthwhile, and the use of container portability to move apps from cloud to cloud remains rare, according to analysts.

“Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices,” said Lauren Nelson, analyst at Forrester Research. “They are far less cautious when it comes to getting locked into public cloud services, especially if that lock in comes with great value.”

Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices. They are far less cautious when it comes to getting locked into public cloud services …
Lauren Nelson, analyst, Forrester Research

In fact, some IT pros argue that lock-in is preferable to missing out on the value of cloud-specific secondary services, such as AWS Lambda.

“I am staunchly single cloud,” said Robert Alcorn, chief architect of platform and product operations at Education Advisory Board (EAB), a higher education research firm headquartered in Washington, D.C. “If you look at how AWS has accelerated its development over the last year or so, it makes multi-cloud almost a nonsensical question.”

For Alcorn, the value of integrating EAB’s GitLab pipelines with AWS Lambda outweighs the risk of lock-in to the AWS cloud. Connecting AWS Lambda and API Gateway to Amazon’s SageMaker for machine learning  has also represented almost a thousandfold drop in costs compared to the company’s previous container-based hosting platform, he said.

Even without the company’s interest in Lambda integration, the work required to keep applications fully cloud-neutral isn’t worth it for his company, Alcorn said.

“There’s a ceiling to what you can do in a truly agnostic way,” he said. “Hosted cloud services like ECS and EKS are also an order of magnitude simpler to manage. I don’t want to pay the overhead tax to be cloud-neutral.”

Some IT analysts also sound a note of caution about the value of multi-cloud management for disaster recovery or price negotiations with cloud vendors, depending on the organization. For example, some financial regulators require multi-cloud deployments for risk mitigation, but the worst case scenario of a complete cloud failure or the closure of a cloud provider’s entire business is highly unlikely, Forrester’s Nelson wrote in a March 2019 research report, “Assess the Pain-Gain Tradeoff of Multicloud Strategies.”

Splitting cloud deployments between multiple providers also may not give enterprises as much of a leg up in price negotiations as they expect, unless the customer is a very large organization, Nelson wrote in the report.

The risks of multi-cloud management are also manifold, according to Nelson’s report, from high costs for data ingress and egress between clouds to network latency and bandwidth issues, broader skills requirements for IT teams, and potentially double the resource costs to keep a second cloud deployment on standby for disaster recovery.

Of course, value is in the eye of the beholder, and each organization’s multi-cloud mileage may vary.

“I’d rather spend more for the company to be up and running, and not lose my job,” Cardano’s Sanftenberg said.


IT experts exchange container security tips and caveats


Real-world container security requires users to dig in to the finer points of container, host, Kubernetes and application configurations.


BOSTON — Blue-chip IT shops have established production container orchestration deployments. Now, the question is how to make them fully secure within large, multi-tenant infrastructures.

For starters, users must make changes to default settings in both Docker and Kubernetes to close potential security loopholes. For example, the Docker daemon’s control socket, docker.sock, is commonly mounted into containers; without proper security controls on its usage, it is vulnerable to an attack that can use it to access the host operating system and then back-end databases to exfiltrate data. Similarly, the Kubernetes API’s default setting could potentially let containers access host operating systems through a malicious pod.
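
As a concrete illustration (not from the presentations), the risky pattern is an ordinary bind mount of the Docker socket; the image name here is hypothetical:

# Risky: handing the Docker control socket to a container effectively
# gives it root on the host if nothing else constrains it
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-ci-agent

# Safer: skip the socket mount unless the workload genuinely needs it,
# and forbid privilege escalation inside the container
docker run -d --security-opt no-new-privileges my-ci-agent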

“Containers also have the same problem as any other VM: They can talk to each other via an internal network,” said Jason Patterson, application security architect at NCR Corp., an Atlanta-based maker of financial transaction systems for online banking and retailers, in a presentation at the DevSecCon security conference held here this week. “That means that one misconfiguration can compromise pretty much all the containers in the environment.”

Container security configuration settings are critical

NCR uses Red Hat’s OpenShift, which restricts the Kubernetes API settings out of the box, but OpenShift users must set up security context constraints, Patterson said.

Etienne Stalmans at DevSecCon
Heroku’s Etienne Stalmans presents on container security at DevSecCon.

In general, it’s best to constrain a user’s permissions and each container’s capabilities as tightly as possible and, ideally, configure container images to whitelist only the calls and actions they’re authorized to perform — but this is still uncommon, he added.

It’s possible to limit what a container root user can do outside the container or the host on which the container runs, said Etienne Stalmans, senior security engineer at Heroku, based in San Francisco, in a separate DevSecCon presentation. To do this, container administrators can adjust settings in seccomp, an application sandboxing mechanism in the Linux kernel, and configure application permissions or capabilities.

“That still makes them a privileged user, but not outside the container,” Stalmans said. “Overall, it’s best to drop all capabilities for all container users, and then add them back in as required.”
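
A minimal sketch of that advice with standard Docker flags follows; the image name and the added capability are illustrative, and the seccomp profile path is an assumption:

# Drop every Linux capability, add back only what the workload needs,
# and apply a custom seccomp profile to restrict system calls
docker run \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt seccomp=/path/to/profile.json \
  my-app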

Some highly sensitive applications require isolation provided by a hypervisor to remove any possibility that an attacker can gain host access. Vendors such as Intel, Google and Microsoft offer modified hypervisors specifically tuned for container isolation.

DevSecCon presenters also touched on tools that can be used to minimize the attack surface of container and host operating systems.

Beeline, which sells workforce management and vendor management software, uses an Oracle tool called Smith that strips out unneeded OS functions. “That shrank our Docker image sizes from as much as 65 MB to 800 KB to 2 MB,” said Jason Looney, enterprise architect at Beeline, based in Jacksonville, Fla.

Container security experts weigh host vs. API vulnerabilities

Overall, it’s best to drop all capabilities for all container users, and then add them back in as required.
Etienne Stalmans, senior security engineer, Heroku

Most of the best-known techniques in container security restrict attackers’ access to hosts and other back-end systems from a compromised container instance. But prevention of unauthorized access to APIs is critical, too, as attackers in recent high-profile attacks on AWS-based systems targeted vulnerable APIs, rather than hosts, said Sam Bisbee, chief security officer of Boston-based IT security software vendor Threat Stack, in a DevSecCon presentation.

Attackers don’t necessarily look for large amounts of data, Bisbee added. “Your security policy must cover the whole infrastructure, not just important data,” he said.

Kubernetes version 1.8 improved API security with a switch from attribute-based access control to role-based access control (RBAC). And most installers and providers of Kubernetes, including cloud container services, now have RBAC Kubernetes API access by default. But users should go further with configuration settings that prevent untrusted pods from talking to the Kubernetes API, Stalmans said.

“There is some discussion [in the Kubernetes community] to make that the default setting,” he said. It’s also possible to do this programmatically from container networking utilities, such as Calico, Istio and Weave. But “that means we’re back to firewall rules” until a new default is decided, he said.
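
One such setting, shown here as a rough sketch, is to stop pods in an untrusted namespace from automounting a service-account token, so they cannot authenticate to the Kubernetes API at all. The namespace name is a placeholder:

# Strategic-merge patch on the namespace's default service account;
# pods that don't opt back in no longer receive an API token
kubectl patch serviceaccount default -n untrusted-apps \
  -p '{"automountServiceAccountToken": false}'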


IT pros debate upstream vs. packaged Kubernetes implementations

Packaged versions of Kubernetes promise ease of use for the finicky container orchestration platform, but some enterprises will stick with a DIY approach to Kubernetes implementation.

Red Hat, Docker, Heptio, Mesosphere, Rancher, Platform9, Pivotal, Google, Microsoft, IBM and Cisco are among the many enterprise vendors seeking to cash in on the container craze with prepackaged Kubernetes implementations for private and hybrid clouds. Some of these products — such as Red Hat’s OpenShift Container Platform, Docker Enterprise Edition and Rancher’s eponymous platform — offer their own distribution of the container orchestration software, and most add their own enterprise security and management features on top of upstream Kubernetes code.

However, some enterprise IT shops still prefer to download Kubernetes source code from GitHub and leave out IT vendor middlemen.

“We’re seeing a lot of companies go with [upstream] Kubernetes over Docker [Enterprise Edition] and [Red Hat] OpenShift,” said Damith Karunaratne, director of client solutions for Indellient Inc., an IT consulting firm in Oakville, Ont. “Those platforms may help with management out of the gate, but software license costs are always a consideration, and companies are confident in their technical teams’ expertise.”

The case for pure upstream Kubernetes

One such company is Rosetta Stone, which has used Docker containers in its DevOps process for years, but has yet to put a container orchestration tool into production. In August 2017, the company considered Kubernetes overkill for its applications and evaluated Docker swarm mode as a simpler approach to container orchestration.

Fast-forward a year, however, and the global education software company plans to introduce upstream Kubernetes into production due to its popularity and ubiquity as the container orchestration standard in the industry.

Concerns about Kubernetes management complexity are outdated, given how the latest versions of the tool smooth out management kinks and require less customization for enterprise security features, said Kevin Burnett, DevOps lead for Rosetta Stone in Arlington, Va.

“We’re a late adopter, but we have the benefit of more maturity in the platform,” Burnett said. “We also wanted to avoid [licensing] costs, and we already have servers. Eventually, we may embrace a cloud service like Google Kubernetes Engine more fully, but not yet.”

Burnett said his team prefers to hand-roll its own configurations of open source tools, and it doesn’t want to use features from a third-party vendor’s Kubernetes implementation that may hinder cloud portability in the future.

Other enterprise IT shops are concerned that third-party Kubernetes implementations — particularly those that rely on a vendor’s own distribution of Kubernetes, such as Red Hat’s OpenShift — will be easier to install initially, but could worsen management complexity in the long run.

“Container sprawl combined with a forked Kubernetes runtime in the hands of traditional IT ops is a management nightmare,” said a DevOps transformation leader at an insurance company who spoke on condition of anonymity, because he’s not authorized to publicly discuss the company’s product evaluation process.

His company is considering OpenShift because of an existing relationship with the vendor, but adding a new layer of orchestration and managing multiple control planes for VMs and containers would also be difficult, the DevOps leader predicted, particularly when it comes to IT ops processes such as security patching.

“Why invite that mess when you already have your hands full with a number of packaged containers that you’re going to have to develop security patching processes for?” he said.

Vendors’ Kubernetes implementations offer stability, support

Fork is a fighting word in the open source world, and most vendors say their Kubernetes implementations don’t diverge from pure Kubernetes code. And early adopters of vendors’ Kubernetes implementations said enterprise support and security features are the top priorities as they roll out container orchestration tools, rather than conformance with upstream code, per se.

Amadeus, a global travel technology company, is an early adopter of Red Hat OpenShift. As such, Dietmar Fauser, vice president of core platforms and middleware at Amadeus, said he doesn’t worry about security patching or forked Kubernetes from Red Hat. While Red Hat could theoretically choose to deviate from, or fork, upstream Kubernetes, it hasn’t done so, and Fauser said he doubts the vendor ever will.

Meanwhile, Amadeus is on the cusp of multi-cloud container portability, with instances of OpenShift on Microsoft Azure, Google and AWS public clouds in addition to its on-premises data centers. Fauser said he expects the multi-cloud deployment process will go smoothly under OpenShift.

Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors.
Surya Suravarapu, assistant vice president of product development, Change Healthcare

“Red Hat is very good at maintaining open source software distributions, patching is consistent and easy to maintain, and I trust them to maintain a portable version of Kubernetes,” Fauser said. “Some upstream Kubernetes APIs come and go, but Red Hat’s approach offers stability.”

Docker containers and Kubernetes are de facto standards that span container environments and provide portability, regardless of which vendor’s Kubernetes implementation is in place, said Surya Suravarapu, assistant vice president of product development for Change Healthcare, a healthcare information technology company in Nashville, Tenn., that spun out of McKesson in March 2017.

Suravarapu declined to specify which vendor’s container orchestration tools the company uses, but said Change Healthcare uses multiple third-party Kubernetes tools and plans to put containers into production this quarter.

“Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors,” Suravarapu said. “The focus is on productivity improvements for our IT teams, where built-in tooling converts code to container images with the click of a button or one CLI [command-line interface] line, and compliance and security policies are available to all product teams.”

A standard way to manage containers in Kubernetes offers enough consistency between environments to improve operational efficiency, while portability between on-premises, public cloud and customer environments is a longer-term goal, Suravarapu said.

“We’re a healthcare IT company,” he added. “We can’t just go with a raw tool without 24/7 enterprise-level support.”

Still, Amadeus’s Fauser acknowledged there’s risk in trusting one vendor’s Kubernetes implementation, especially when that implementation is one of the more popular options on the market.

“Red Hat wants to own the whole ecosystem, so there’s the danger that they could limit other companies’ access to providing plug-ins for their platform,” he said.

That hasn’t happened, but the risk exists, Fauser said.

Container security emerges in IT products enterprises know and trust

Container security has arrived from established IT vendors that enterprises know and trust, but startups that were first to market still have a lead, with support for cloud-native tech.

Managed security SaaS provider Alert Logic this week became the latest major vendor to throw its hat into the container security ring, a month after cloud security and compliance vendor Qualys added container security support to its DevSecOps tool.

Container security monitoring is now a part of Alert Logic’s Cloud Defender and Threat Manager intrusion detection systems (IDSes). Software agents deployed on each host inside a privileged container monitor network traffic between containers within that host, as well as between hosts, for threats. A web application firewall blocks suspicious traffic Threat Manager finds between containers, and Threat Manager offers remediation recommendations to address any risks that remain in the infrastructure.
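
Alert Logic’s own manifests aren’t shown here, but the agent-per-host pattern described above generally maps to a privileged Kubernetes DaemonSet along these lines; the names and image are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ids-agent                      # hypothetical name
spec:
  selector:
    matchLabels:
      app: ids-agent
  template:
    metadata:
      labels:
        app: ids-agent
    spec:
      hostNetwork: true                # observe the node's network traffic
      containers:
      - name: agent
        image: registry.example.com/ids-agent:latest   # placeholder image
        securityContext:
          privileged: true             # needed to inspect traffic on the host
EOF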

Accesso Technology Group bought into Alert Logic’s IDS products in January 2018 because they support VM-based and bare-metal infrastructure, and planned container support was a bonus.

“They gave us a central location to monitor our physical data centers, remote offices and multiple public clouds,” said Will DeMar, director of information security at Accesso, a ticketing and e-commerce service provider in Lake Mary, Fla.

DeMar beta-tested the Threat Manager features and has already deployed them with production Kubernetes clusters in Google Kubernetes Engine and AWS Elastic Compute Cloud environments, though Alert Logic’s official support for its initial release is limited to AWS.

Immediate visibility into intrusion and configuration issues … [is] critical to our DevOps process.
Will DeMar, director of information security, Accesso

“We have [AWS] CloudFormation and [HashiCorp] Terraform scripts that put Alert Logic onto every new Kubernetes host, which gives us immediate visibility into intrusion and configuration issues,” DeMar said. “It’s critical to our DevOps process.”

A centralized view of IT security in multiple environments and “one throat to choke” in a single vendor appeals to DeMar, but he hasn’t ruled out tools from Alert Logic’s startup competitors, such as Aqua Security, NeuVector and Twistlock, which he sees as complementary to Alert Logic’s product.

“Aqua and Twistlock are more container security-focused than intrusion detection-focused,” DeMar said. “They help you check the configuration on your container before you release it to the host; Alert Logic doesn’t help you there.”

Container security competition escalates

Alert Logic officials, however, do see Aqua Security, Twistlock and their ilk as competitors, and the container image scanning ability DeMar referred to is on the company’s roadmap for Threat Manager in the next nine months. Securing Docker containers involves multiple layers of infrastructure, and Alert Logic positions its container security approach as network-based IDS, as opposed to host-based IDS. The company said network-based IDS more deeply inspects real-time network traffic at the packet level, whereas startups’ products examine only where that network traffic goes between hosts.

Alert Logic Threat Manager UI
Alert Logic’s Threat Manager offers container security remediation recommendations.

Aqua Security co-founder and CTO Amir Jerbi, of course, sees things differently.

“Traditional security tools are trying to shift into containers and still talk in traditional terms about the host and network,” Jerbi said. “Container security companies like ours don’t distinguish between network, host and other levels of access — we protect the container, through a mesh of multiple disciplines.”

That’s the major distinction for enterprise end users: whether they prefer container security baked into broader, traditional products or as the sole focus of their vendor’s expertise. Aqua Security version 3.2, also released this week, added support for container host monitoring where thin OSes are used, but the tool isn’t a good fit in VM or bare-metal environments where containers aren’t present, Jerbi said.

Aqua Security’s tighter focus means it has a head start on the latest and greatest container security features. For example, version 3.2 includes the ability to customize and build a whitelist of system calls containers make, which is still on the roadmap for Alert Logic. Version 3.2 also adds support for static AWS Lambda function monitoring, with real-time Lambda security monitoring already on the docket. Aqua Security was AWS’ partner for container security with Fargate, while Alert Logic must still catch up there as well.

Industry watchers expect this dynamic to continue for the rest of 2018 and predict that incumbent vendors will snap up startups in an effort to get ahead of the curve.

“Everyone sees the same hill now, but they approach it from different viewpoints, more aligned with developers or more aligned with IT operations,” said Fernando Montenegro, analyst with 451 Research. “As the battle lines become better defined, consolidation among vendors is still a possibility, to strengthen the operations approach where vendors are already focused on developers and vice versa.”

Insider preview: Windows container image

Earlier this year at Microsoft Build 2018, we announced a third container base image for applications that have additional API dependencies beyond Nano Server and Windows Server Core. Now the time has finally come, and the Windows container image is available for Windows Insiders.

Why another container image?

In conversations with IT pros and developers, a few themes came up that went beyond the nanoserver and windowsservercore container images:
  • Quite a few customers were interested in moving their legacy applications into containers to benefit from container orchestration and management technologies like Kubernetes. However, not all applications could be easily containerized, in some cases due to missing components like proofing support, which is not included in Windows Server Core.
  • Others wanted to leverage containers to run automated UI tests as part of their CI/CD processes or use other graphics capabilities like DirectX, which are not available within the other container images.

With the new windows container image, we’re now offering a third option to choose from based on the requirements of the workload. We’re looking forward to seeing what you will build!

How can you get it?

If you are running a container host on Windows Insider build 17704, you can get this container image using the following command:

docker pull mcr.microsoft.com/windows-insider:10.0.17704.1000

To simply get the latest available version of the container image, you can use the following command:

docker pull mcr.microsoft.com/windows-insider:latest

Please note that for compatibility reasons we recommend running the same build version for the container host and the container itself.
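
As a quick sanity check (not part of the original post), you can start a container from the pulled image and print its build string, which should line up with the host build when using process isolation:

docker run --rm mcr.microsoft.com/windows-insider:10.0.17704.1000 cmd /c ver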

Since this image is currently part of the Windows Insider preview, we’re looking forward to your feedback, bug reports, and comments. We will be publishing newer builds of this container image along with the insider builds.

All the best,
Lars

Container orchestration systems at risk by being web-accessible

Researchers found more than 21,000 container orchestration systems are at risk simply because they are accessible via the web.

Security researchers from Lacework, a cloud security vendor based in Mountain View, Calif., searched for popular container orchestration systems, like Kubernetes, Docker Swarm, Mesosphere and OpenShift, and they found tens of thousands of administrator dashboards were accessible on the internet. According to Lacework’s report, this exposure alone could leave organizations at risk because of the “potential for attack points caused by poorly configured resources, lack of credentials and the use of nonsecure protocols.”

“There are typically two critical pieces to managing these systems. First is a web UI and associated APIs. Secondly, an administrator dashboard and API are popular because they allow users to essentially run all aspects of a container cluster from a single interface,” Lacework’s researchers wrote in its report. “Access to the dashboard gives you top-level access to all aspects of administration for the cluster it is assigned to manage, [including] managing applications, containers, starting workloads, adding and modifying applications, and setting key security controls.”

Dan Hubbard, chief security architect at Lacework, said these cloud container orchestration systems represent a significant change from traditional security.

“In the old data center days, it was easy to set policy around who could access admin consoles, as you would simply limit it to your corporate network and trusted areas. The cloud, combined with our need to work from anywhere, changes this dramatically, and there are certainly use cases to allow remote administration over the internet,” Hubbard said via email. “That said, it should be done in a secure way. Extra security measures like multifactor authentication, enforced SSL, [role-based access controls], a proxy in front of the server to limit access or a ‘jump server’ are all ways to do this. This is something that security needs to be aware of.”

Lacework reported that more than 300 of the exposed container orchestration systems’ dashboards did not have credentials implemented to limit access, and “38 servers running healthz [web application health and security checker] live on the Internet with no authentication whatsoever were discovered.”
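
The exposure the report describes is easy to picture: an open healthz endpoint answers a plain, unauthenticated HTTP request. The hostname below is a placeholder:

# A locked-down cluster should reject this or demand credentials;
# the exposed servers in the report simply answer "ok"
curl -k https://k8s.example.com/healthz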

Hubbard added that “these sites had security weaknesses that could have enabled hackers to either attack directly these nodes or provide hackers with information that would allow them to attack more easily the company owning these nodes.” 

However, despite warning of potential risks to these container orchestration systems, Hubbard and Lacework could not expand on specific threats facing any of the nearly 22,000 accessible dashboards described in the report.

“Technically, they are all connected to the internet and their ports are open, so attackers can gain privileged access or discover information about the target,” Hubbard said. “With respect to flaws, we did not perform any password cracking or dictionary attacks against the machines or vulnerability scans. However, we did notice that a lot of the machines had other services open besides the container orchestration, and that certainly increases the attack surface.”

Red Hat and Microsoft co-develop the first Red Hat OpenShift jointly managed service on a public cloud

Microsoft and Red Hat expand partnership around hybrid cloud, container management and developer productivity

SAN FRANCISCO — May 8, 2018 — Microsoft Corp. (Nasdaq “MSFT”) and Red Hat Inc. (NYSE: RHT) on Tuesday expanded their alliance to empower enterprise developers to run container-based applications across Microsoft Azure and on-premises. With this collaboration, the companies will introduce the first jointly managed OpenShift offering in the public cloud, combining the power of Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform, and Azure, Microsoft’s public cloud.

“Gartner predicts that, by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today.”1

As organizations turn to containerized applications and Kubernetes to drive digital transformation and address customer, competitive and market demands, they need solutions to easily orchestrate and manage these applications across the public cloud and on-premises. Red Hat OpenShift on Azure will be jointly engineered and designed to reduce the complexity of container management for customers. As the companies’ preferred offering for hybrid container workflows, the solution will be jointly managed by Red Hat and Microsoft for their joint customers, with support from both companies.

In addition to being a fully managed service, Red Hat OpenShift on Azure will bring enterprise developers:

  • Flexibility: Freely move applications between on-premises environments and Azure using OpenShift, which offers a consistent container platform across the hybrid cloud.
  • Speed: Connect faster, and with enhanced security, between Azure and on-premises OpenShift clusters with hybrid networking.
  • Productivity: Access Azure services like Azure Cosmos DB, Azure Machine Learning and Azure SQL DB, making developers more productive.

When customers choose Red Hat OpenShift on Azure, they will receive a managed service backed by operations and support services from both companies. Support extends across their containerized applications, operating systems, infrastructure and the orchestrator. Further, Red Hat’s and Microsoft’s sales organizations will work together to bring the companies’ extensive technology platforms to customers, equipping them to build more cloud-native applications and modernize existing applications.

Customers can more easily move their applications between on-premises environments and Microsoft Azure because they are leveraging a consistent container platform in OpenShift across both footprints of the hybrid cloud.

The expanded collaboration between Microsoft and Red Hat will also include:

  • Enabling the hybrid cloud with full support for Red Hat OpenShift Container Platform on-premises and on Microsoft Azure Stack, enabling a consistent on- and off-premises foundation for the development, deployment and management of cloud-native applications on Microsoft infrastructure. This provides a pathway for customers to pair the power of the Azure public cloud with the flexibility and control of OpenShift on-premises on Azure Stack.
  • Multiarchitecture container management that spans both Windows Server and Red Hat Enterprise Linux containers. Red Hat OpenShift on Microsoft Azure will consistently support Windows containers alongside Red Hat Enterprise Linux containers, offering a uniform orchestration platform that spans the leading enterprise platform providers.
  • More ways to harness data with expanded integration of Microsoft SQL Server across the Red Hat OpenShift landscape. This will soon include SQL Server as a Red Hat certified container for deployment on Red Hat OpenShift on Azure and Red Hat OpenShift Container Platform across the hybrid cloud, including Azure Stack.
  • More ways for developers to use Microsoft tools with Red Hat as Visual Studio Enterprise and Visual Studio Professional subscribers will get Red Hat Enterprise Linux credits. For the first time, developers can work with .NET, Java, or the most popular open source frameworks on this single, and supported, platform.

Availability

Red Hat OpenShift on Azure is anticipated to be available in preview in the coming months. Red Hat OpenShift Container Platform and Red Hat Enterprise Linux on Azure and Azure Stack are currently available.

Supporting Quotes

Paul Cormier, president, Products and Technologies, Red Hat

“Very few organizations are able to fully silo their IT operations into a solely on-premises or public cloud footprint; instead, it’s a hybrid mixture of these environments that presents a path toward digital transformation. By extending our partnership with Microsoft, we’re able to offer the industry’s most comprehensive Kubernetes platform on a leading public cloud, providing the ability for customers to more easily harness innovation across the hybrid cloud without sacrificing production stability.”

Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft

“Microsoft and Red Hat are aligned in our vision to deliver simplicity, choice and flexibility to enterprise developers building cloud-native applications. Today, we’re combining both companies’ leadership in Kubernetes, hybrid cloud and enterprise operating systems to simplify the complex process of container management, with an industry-first solution on Azure.”

About Red Hat Inc.

Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

1 Gartner Inc., Smarter with Gartner, “6 Best Practices for Creating a Container Platform Strategy,” Contributor: Christy Pettey, Oct. 31, 2017

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

 

 
