
IT experts exchange container security tips and caveats


Real-world container security requires users to dig in to the finer points of container, host, Kubernetes and application configurations.


BOSTON — Blue-chip IT shops have established production container orchestration deployments. Now, the question is how to make them fully secure within large, multi-tenant infrastructures.

For starters, users must change default settings in both Docker and Kubernetes to close potential security loopholes. For example, the Docker daemon’s Unix socket, docker.sock, is sometimes mounted into containers; without proper controls on its usage, an attacker can use it to access the host operating system, then back-end databases, and exfiltrate data. Similarly, the Kubernetes API’s default settings could potentially let containers access host operating systems through a malicious pod.
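To illustrate the docker.sock risk (a hypothetical sketch, not from the presentations): any process that can write to the mounted socket can instruct the Docker daemon to start a fully privileged container on the host.

```shell
# DANGEROUS (illustrative only): mounting the Docker daemon socket into a
# container. Anything inside can now drive the Docker daemon on the host.
docker run -it -v /var/run/docker.sock:/var/run/docker.sock alpine sh

# From inside, an attacker with a docker client can escalate, e.g.:
#   docker run -it --privileged --pid=host -v /:/host alpine chroot /host sh
# ...which yields a root shell on the host filesystem.

# Safer default: do not mount the socket at all; if a workload genuinely
# needs the Docker API, put an authorizing proxy in front of the socket.
```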

“Containers also have the same problem as any other VM: They can talk to each other via an internal network,” said Jason Patterson, application security architect at NCR Corp., an Atlanta-based maker of financial transaction systems for online banking and retailers, in a presentation at the DevSecCon security conference held here this week. “That means that one misconfiguration can compromise pretty much all the containers in the environment.”

Container security configuration settings are critical

NCR uses Red Hat’s OpenShift, which restricts the Kubernetes API settings out of the box, but OpenShift users must set up security context constraints, Patterson said.
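Security context constraints (SCCs) are managed with the `oc` CLI; a hedged sketch, with hypothetical project and service account names:

```shell
# List the SCCs defined in the cluster; 'restricted' is the safe default
oc get scc

# Grant a more permissive SCC only to the specific service account that
# needs it, rather than loosening the default for everyone:
oc adm policy add-scc-to-user anyuid -z builder-sa -n my-project

# Review which SCC a running pod was admitted under:
oc get pod my-pod -n my-project -o yaml | grep openshift.io/scc
```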

Heroku’s Etienne Stalmans presents on container security at DevSecCon.

In general, it’s best to constrain a user’s permissions and each container’s capabilities as tightly as possible and, ideally, configure container images to whitelist only the calls and actions they’re authorized to perform — but this is still uncommon, he added.

It’s possible to limit what a container root user can do outside the container or the host on which the container runs, said Etienne Stalmans, senior security engineer at Heroku, based in San Francisco, in a separate DevSecCon presentation. To do this, container administrators can adjust settings in seccomp, an application sandboxing mechanism in the Linux kernel, and configure application permissions or capabilities.

“That still makes them a privileged user, but not outside the container,” Stalmans said. “Overall, it’s best to drop all capabilities for all container users, and then add them back in as required.”
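Stalmans’ drop-then-add advice maps directly onto Docker’s runtime flags. A minimal sketch, assuming the workload only needs to bind a privileged port (the image name and seccomp profile path are placeholders):

```shell
# Drop every Linux capability, add back only what the workload needs
# (here, binding to a port below 1024), and apply a custom seccomp profile
# that whitelists the system calls the application actually makes:
docker run \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --security-opt seccomp=./my-profile.json \
  --security-opt no-new-privileges \
  my-app:latest
```

Root inside such a container is still uid 0, but with all capabilities dropped it can do very little outside the container’s own namespaces.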

Some highly sensitive applications require isolation provided by a hypervisor to remove any possibility that an attacker can gain host access. Vendors such as Intel, Google and Microsoft offer modified hypervisors specifically tuned for container isolation.

DevSecCon presenters also touched on tools that can be used to minimize the attack surface of container and host operating systems.

Beeline, which sells workforce management and vendor management software, uses an Oracle tool called Smith that strips out unneeded OS functions. “That shrank our Docker image sizes from as much as 65 MB to 800 KB to 2 MB,” said Jason Looney, enterprise architect at Beeline, based in Jacksonville, Fla.
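Smith is Oracle’s own microcontainer build tool. A comparable, if less aggressive, shrinking effect can be had with a standard Docker multi-stage build; this sketch uses hypothetical names and assumes a statically compiled Go binary:

```shell
# Dockerfile (multi-stage): compile in a full image, ship only the binary.
cat > Dockerfile <<'EOF'
FROM golang:1.11 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/app

# 'scratch' contains no OS at all, so the attack surface is just the binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
docker build -t my-app:slim .
```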

Container security experts weigh host vs. API vulnerabilities


Most of the best-known techniques in container security restrict attackers’ access to hosts and other back-end systems from a compromised container instance. But prevention of unauthorized access to APIs is critical, too, as attackers in recent high-profile attacks on AWS-based systems targeted vulnerable APIs, rather than hosts, said Sam Bisbee, chief security officer of Boston-based IT security software vendor Threat Stack, in a DevSecCon presentation.

Attackers don’t necessarily look for large amounts of data, Bisbee added. “Your security policy must cover the whole infrastructure, not just important data,” he said.

Kubernetes version 1.8 improved API security with a switch from attribute-based access control to role-based access control (RBAC). And most installers and providers of Kubernetes, including cloud container services, now enable RBAC for Kubernetes API access by default. But users should go further, with configuration settings that prevent untrusted pods from talking to the Kubernetes API, Stalmans said.

“There is some discussion [in the Kubernetes community] to make that the default setting,” he said. It’s also possible to do this programmatically from container networking utilities, such as Calico, Istio and Weave. But “that means we’re back to firewall rules” until a new default is decided, he said.
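Until a stricter default lands, one hedged way to keep untrusted pods away from the Kubernetes API is to stop auto-mounting service account tokens and add an egress NetworkPolicy. The namespace name and API server address below are hypothetical, and egress enforcement requires a network plugin such as Calico or Weave:

```shell
# First, don't hand pods API credentials they don't need:
#   spec.automountServiceAccountToken: false   (per pod or service account)

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-apiserver-egress
  namespace: untrusted
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8       # allow cluster-internal traffic...
        except:
        - 10.0.0.1/32          # ...but not the API server's address
EOF
```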


IT pros debate upstream vs. packaged Kubernetes implementations

Packaged versions of Kubernetes promise ease of use for the finicky container orchestration platform, but some enterprises will stick with a DIY approach to Kubernetes implementation.

Red Hat, Docker, Heptio, Mesosphere, Rancher, Platform9, Pivotal, Google, Microsoft, IBM and Cisco are among the many enterprise vendors seeking to cash in on the container craze with prepackaged Kubernetes implementations for private and hybrid clouds. Some of these products — such as Red Hat’s OpenShift Container Platform, Docker Enterprise Edition and Rancher’s eponymous platform — offer their own distribution of the container orchestration software, and most add their own enterprise security and management features on top of upstream Kubernetes code.

However, some enterprise IT shops still prefer to download Kubernetes source code from GitHub and leave out IT vendor middlemen.

“We’re seeing a lot of companies go with [upstream] Kubernetes over Docker [Enterprise Edition] and [Red Hat] OpenShift,” said Damith Karunaratne, director of client solutions for Indellient Inc., an IT consulting firm in Oakville, Ont. “Those platforms may help with management out of the gate, but software license costs are always a consideration, and companies are confident in their technical teams’ expertise.”

The case for pure upstream Kubernetes

One such company is Rosetta Stone, which has used Docker containers in its DevOps process for years, but has yet to put a container orchestration tool into production. In August 2017, the company considered Kubernetes overkill for its applications and evaluated Docker swarm mode as a simpler approach to container orchestration.

Fast-forward a year, however, and the global education software company plans to introduce upstream Kubernetes into production due to its popularity and ubiquity as the container orchestration standard in the industry.

Concerns about Kubernetes management complexity are outdated, given how the latest versions of the tool smooth out management kinks and require less customization for enterprise security features, said Kevin Burnett, DevOps lead for Rosetta Stone in Arlington, Va.

“We’re a late adopter, but we have the benefit of more maturity in the platform,” Burnett said. “We also wanted to avoid [licensing] costs, and we already have servers. Eventually, we may embrace a cloud service like Google Kubernetes Engine more fully, but not yet.”

Burnett said his team prefers to hand-roll its own configurations of open source tools, and it doesn’t want to use features from a third-party vendor’s Kubernetes implementation that may hinder cloud portability in the future.

Other enterprise IT shops are concerned that third-party Kubernetes implementations — particularly those that rely on a vendor’s own distribution of Kubernetes, such as Red Hat’s OpenShift — will be easier to install initially, but could worsen management complexity in the long run.

“Container sprawl combined with a forked Kubernetes runtime in the hands of traditional IT ops is a management nightmare,” said a DevOps transformation leader at an insurance company who spoke on condition of anonymity, because he’s not authorized to publicly discuss the company’s product evaluation process.

His company is considering OpenShift because of an existing relationship with the vendor, but adding a new layer of orchestration and managing multiple control planes for VMs and containers would also be difficult, the DevOps leader predicted, particularly when it comes to IT ops processes such as security patching.

“Why invite that mess when you already have your hands full with a number of packaged containers that you’re going to have to develop security patching processes for?” he said.

Vendors’ Kubernetes implementations offer stability, support

Fork is a fighting word in the open source world, and most vendors say their Kubernetes implementations don’t diverge from pure Kubernetes code. And early adopters of vendors’ Kubernetes implementations said enterprise support and security features are the top priorities as they roll out container orchestration tools, rather than conformance with upstream code, per se.

Amadeus, a global travel technology company, is an early adopter of Red Hat OpenShift. As such, Dietmar Fauser, vice president of core platforms and middleware at Amadeus, said he doesn’t worry about security patching or forked Kubernetes from Red Hat. While Red Hat could theoretically choose to deviate from, or fork, upstream Kubernetes, it hasn’t done so, and Fauser said he doubts the vendor ever will.

Meanwhile, Amadeus is on the cusp of multi-cloud container portability, with instances of OpenShift on Microsoft Azure, Google and AWS public clouds in addition to its on-premises data centers. Fauser said he expects the multi-cloud deployment process will go smoothly under OpenShift.


“Red Hat is very good at maintaining open source software distributions, patching is consistent and easy to maintain, and I trust them to maintain a portable version of Kubernetes,” Fauser said. “Some upstream Kubernetes APIs come and go, but Red Hat’s approach offers stability.”

Docker containers and Kubernetes are de facto standards that span container environments and provide portability, regardless of which vendor’s Kubernetes implementation is in place, said Surya Suravarapu, assistant vice president of product development for Change Healthcare, a healthcare information technology company in Nashville, Tenn., that spun out of McKesson in March 2017.

Suravarapu declined to specify which vendor’s container orchestration tools the company uses, but said Change Healthcare uses multiple third-party Kubernetes tools and plans to put containers into production this quarter.

“Multi-tenancy support and a DevOps platform on top of Kubernetes were what made us want to go with third-party vendors,” Suravarapu said. “The focus is on productivity improvements for our IT teams, where built-in tooling converts code to container images with the click of a button or one CLI [command-line interface] line, and compliance and security policies are available to all product teams.”

A standard way to manage containers in Kubernetes offers enough consistency between environments to improve operational efficiency, while portability between on-premises, public cloud and customer environments is a longer-term goal, Suravarapu said.

“We’re a healthcare IT company,” he added. “We can’t just go with a raw tool without 24/7 enterprise-level support.”

Still, Amadeus’s Fauser acknowledged there’s risk in trusting one vendor’s Kubernetes implementation, especially when that implementation is one of the more popular market options.

“Red Hat wants to own the whole ecosystem, so there’s the danger that they could limit other companies’ access to providing plug-ins for their platform,” he said.

That hasn’t happened, but the risk exists, Fauser said.

Container security emerges in IT products enterprises know and trust

Container security has arrived from established IT vendors that enterprises know and trust, but startups that were first to market still have a lead, with support for cloud-native tech.

Managed security SaaS provider Alert Logic this week became the latest major vendor to throw its hat into the container security ring, a month after cloud security and compliance vendor Qualys added container security support to its DevSecOps tool.

Container security monitoring is now a part of Alert Logic’s Cloud Defender and Threat Manager intrusion detection systems (IDSes). Software agents deployed on each host inside a privileged container monitor network traffic for threats, both between containers within that host and between hosts. A web application firewall blocks suspicious traffic Threat Manager finds between containers, and Threat Manager offers remediation recommendations to address any risks that remain in the infrastructure.

Accesso Technology Group bought into Alert Logic’s IDS products in January 2018 because they support VM-based and bare-metal infrastructure, and planned container support was a bonus.

“They gave us a central location to monitor our physical data centers, remote offices and multiple public clouds,” said Will DeMar, director of information security at Accesso, a ticketing and e-commerce service provider in Lake Mary, Fla.

DeMar beta-tested the Threat Manager features and has already deployed them with production Kubernetes clusters in Google Kubernetes Engine and AWS Elastic Compute Cloud environments, though Alert Logic’s official support for its initial release is limited to AWS.


“We have [AWS] CloudFormation and [HashiCorp] Terraform scripts that put Alert Logic onto every new Kubernetes host, which gives us immediate visibility into intrusion and configuration issues,” DeMar said. “It’s critical to our DevOps process.”

A centralized view of IT security in multiple environments and “one throat to choke” in a single vendor appeals to DeMar, but he hasn’t ruled out tools from Alert Logic’s startup competitors, such as Aqua Security, NeuVector and Twistlock, which he sees as complementary to Alert Logic’s product.

“Aqua and Twistlock are more container security-focused than intrusion detection-focused,” DeMar said. “They help you check the configuration on your container before you release it to the host; Alert Logic doesn’t help you there.”

Container security competition escalates

Alert Logic officials, however, do see Aqua Security, Twistlock and their ilk as competitors, and the container image scanning ability DeMar referred to is on the company’s roadmap for Threat Manager in the next nine months. Securing Docker containers involves multiple layers of infrastructure, and Alert Logic positions its container security approach as network-based IDS, as opposed to host-based IDS. The company said network-based IDS more deeply inspects real-time network traffic at the packet level, whereas startups’ products examine only where that network traffic goes between hosts.

Alert Logic’s Threat Manager offers container security remediation recommendations.

Aqua Security co-founder and CTO Amir Jerbi, of course, sees things differently.

“Traditional security tools are trying to shift into containers and still talk in traditional terms about the host and network,” Jerbi said. “Container security companies like ours don’t distinguish between network, host and other levels of access — we protect the container, through a mesh of multiple disciplines.”

That’s the major distinction for enterprise end users: whether they prefer container security baked into broader, traditional products or as the sole focus of their vendor’s expertise. Aqua Security version 3.2, also released this week, added support for container host monitoring where thin OSes are used, but the tool isn’t a good fit in VM or bare-metal environments where containers aren’t present, Jerbi said.

Aqua Security’s tighter focus means it has a head start on the latest and greatest container security features. For example, version 3.2 includes the ability to customize and build a whitelist of system calls containers make, which is still on the roadmap for Alert Logic. Version 3.2 also adds support for static AWS Lambda function monitoring, with real-time Lambda security monitoring already on the docket. Aqua Security was AWS’ partner for container security with Fargate, while Alert Logic must still catch up there as well.

Industry watchers expect this dynamic to continue for the rest of 2018 and predict that incumbent vendors will snap up startups in an effort to get ahead of the curve.

“Everyone sees the same hill now, but they approach it from different viewpoints, more aligned with developers or more aligned with IT operations,” said Fernando Montenegro, analyst with 451 Research. “As the battle lines become better defined, consolidation among vendors is still a possibility, to strengthen the operations approach where vendors are already focused on developers and vice versa.”

Insider preview: Windows container image

Earlier this year at Microsoft Build 2018, we announced a third container base image for applications that have additional API dependencies beyond Nano Server and Windows Server Core. Now the time has finally come, and the Windows container image is available for Windows Insiders.

Why another container image?

In conversations with IT pros and developers, a few themes came up that went beyond the nanoserver and windowsservercore container images:

  • Quite a few customers were interested in moving their legacy applications into containers to benefit from container orchestration and management technologies like Kubernetes. However, not all applications could be easily containerized, in some cases due to missing components like proofing support, which is not included in Windows Server Core.
  • Others wanted to leverage containers to run automated UI tests as part of their CI/CD processes, or to use other graphics capabilities like DirectX, which are not available within the other container images.

With the new windows container image, we’re now offering a third option to choose from based on the requirements of the workload. We’re looking forward to seeing what you build!

How can you get it?

If you are running a container host on Windows Insider build 17704, you can get this container image using the following command:

docker pull mcr.microsoft.com/windows-insider:10.0.17704.1000

To simply get the latest available version of the container image, you can use the following command:

docker pull mcr.microsoft.com/windows-insider:latest

Please note that for compatibility reasons we recommend running the same build version for the container host and the container itself.
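A minimal sketch of building on the new base image (the application, paths and tag here are hypothetical):

```shell
# Dockerfile for a Windows container host on Insider build 17704:
cat > Dockerfile <<'EOF'
# The image tag should match the build number of the container host
FROM mcr.microsoft.com/windows-insider:10.0.17704.1000
COPY app/ C:/app/
ENTRYPOINT ["C:\\app\\legacy-ui-tests.exe"]
EOF
docker build -t my-windows-app .
```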

Since this image is currently part of the Windows Insider preview, we’re looking forward to your feedback, bug reports, and comments. We will be publishing newer builds of this container image along with the insider builds.

All the best,
Lars

Container orchestration systems at risk by being web-accessible

Researchers found more than 21,000 container orchestration systems are at risk simply because they are accessible via the web.

Security researchers from Lacework, a cloud security vendor based in Mountain View, Calif., searched for popular container orchestration systems, like Kubernetes, Docker Swarm, Mesosphere and OpenShift, and they found tens of thousands of administrator dashboards were accessible on the internet. According to Lacework’s report, this exposure alone could leave organizations at risk because of the “potential for attack points caused by poorly configured resources, lack of credentials and the use of nonsecure protocols.”

“There are typically two critical pieces to managing these systems. First is a web UI and associated APIs. Secondly, an administrator dashboard and API are popular because they allow users to essentially run all aspects of a container cluster from a single interface,” Lacework’s researchers wrote in its report. “Access to the dashboard gives you top-level access to all aspects of administration for the cluster it is assigned to manage, [including] managing applications, containers, starting workloads, adding and modifying applications, and setting key security controls.”

Dan Hubbard, chief security architect at Lacework, said these cloud container orchestration systems represent a significant change from traditional security.

“In the old data center days, it was easy to set policy around who could access admin consoles, as you would simply limit it to your corporate network and trusted areas. The cloud, combined with our need to work from anywhere, changes this dramatically, and there are certainly use cases to allow remote administration over the internet,” Hubbard said via email. “That said, it should be done in a secure way. Extra security measures like multifactor authentication, enforced SSL, [role-based access controls], a proxy in front of the server to limit access or a ‘jump server’ are all ways to do this. This is something that security needs to be aware of.”
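A quick, hedged way to check whether your own administrator dashboard is reachable without credentials (the hostname below is a placeholder):

```shell
# From outside your network: an unauthenticated HTTP 200 from an admin
# dashboard or a /healthz endpoint is a red flag; expect 401, 403 or a
# refused connection instead.
curl -sk -o /dev/null -w '%{http_code}\n' https://dashboard.example.com/
curl -sk -o /dev/null -w '%{http_code}\n' https://dashboard.example.com/healthz

# Safer pattern: keep the dashboard off the internet entirely and reach it
# through an authenticated tunnel, e.g. for Kubernetes:
kubectl proxy    # then browse http://localhost:8001/ui from the trusted host
```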

Lacework reported that more than 300 of the exposed container orchestration systems’ dashboards did not have credentials implemented to limit access, and “38 servers running healthz [web application health and security checker] live on the Internet with no authentication whatsoever were discovered.”

Hubbard added that “these sites had security weaknesses that could have enabled hackers to either attack directly these nodes or provide hackers with information that would allow them to attack more easily the company owning these nodes.” 

However, despite warning of potential risks to these container orchestration systems, Hubbard and Lacework could not expand on specific threats facing any of the nearly 22,000 accessible dashboards described in the report.

“Technically, they are all connected to the internet and their ports are open, so attackers can gain privileged access or discover information about the target,” Hubbard said. “With respect to flaws, we did not perform any password cracking or dictionary attacks against the machines or vulnerability scans. However, we did notice that a lot of the machines had other services open besides the container orchestration, and that certainly increases the attack surface.”

Red Hat and Microsoft co-develop the first Red Hat OpenShift jointly managed service on a public cloud

Microsoft and Red Hat expand partnership around hybrid cloud, container management and developer productivity

SAN FRANCISCO — May 8, 2018 — Microsoft Corp. (Nasdaq “MSFT”) and Red Hat Inc. (NYSE: RHT) on Tuesday expanded their alliance to empower enterprise developers to run container-based applications across Microsoft Azure and on-premises. With this collaboration, the companies will introduce the first jointly managed OpenShift offering in the public cloud, combining the power of Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform, and Azure, Microsoft’s public cloud.

“Gartner predicts that, by 2020, more than 50% of global organizations will be running containerized applications in production, up from less than 20% today.”1

With organizations turning to containerized applications and Kubernetes to drive digital transformation and help address customer, competitive and market demands, they need solutions to easily orchestrate and manage these applications across the public cloud and on-premises. Red Hat OpenShift on Azure will be jointly engineered and designed to reduce the complexity of container management for customers. As the companies’ preferred offering for hybrid container workflows for joint customers, the solution will be jointly managed by Red Hat and Microsoft, with support from both companies.

In addition to being a fully managed service, Red Hat OpenShift on Azure will bring enterprise developers:

  • Flexibility: Freely move applications between on-premises environments and Azure using OpenShift, which offers a consistent container platform across the hybrid cloud.
  • Speed: Connect faster, and with enhanced security, between Azure and on-premises OpenShift clusters with hybrid networking.
  • Productivity: Access Azure services like Azure Cosmos DB, Azure Machine Learning and Azure SQL DB, making developers more productive.

When customers choose Red Hat OpenShift on Azure, they will receive a managed service backed by operations and support services from both companies. Support extends across their containerized applications, operating systems, infrastructure and the orchestrator. Further, Red Hat’s and Microsoft’s sales organizations will work together to bring the companies’ extensive technology platforms to customers, equipping them to build more cloud-native applications and modernize existing applications.

Customers can more easily move their applications between on-premises environments and Microsoft Azure because they are leveraging a consistent container platform in OpenShift across both footprints of the hybrid cloud.

The expanded collaboration between Microsoft and Red Hat will also include:

  • Enabling the hybrid cloud with full support for Red Hat OpenShift Container Platform on-premises and on Microsoft Azure Stack, enabling a consistent on- and off-premises foundation for the development, deployment and management of cloud-native applications on Microsoft infrastructure. This provides a pathway for customers to pair the power of the Azure public cloud with the flexibility and control of OpenShift on-premises on Azure Stack.
  • Multiarchitecture container management that spans both Windows Server and Red Hat Enterprise Linux containers. Red Hat OpenShift on Microsoft Azure will consistently support Windows containers alongside Red Hat Enterprise Linux containers, offering a uniform orchestration platform that spans the leading enterprise platform providers.
  • More ways to harness data with expanded integration of Microsoft SQL Server across the Red Hat OpenShift landscape. This will soon include SQL Server as a Red Hat certified container for deployment on Red Hat OpenShift on Azure and Red Hat OpenShift Container Platform across the hybrid cloud, including Azure Stack.
  • More ways for developers to use Microsoft tools with Red Hat, as Visual Studio Enterprise and Visual Studio Professional subscribers will get Red Hat Enterprise Linux credits. For the first time, developers can work with .NET, Java, or the most popular open source frameworks on this single, and supported, platform.

Availability

Red Hat OpenShift on Azure is anticipated to be available in preview in the coming months. Red Hat OpenShift Container Platform and Red Hat Enterprise Linux on Azure and Azure Stack are currently available.

Supporting Quotes

Paul Cormier, president, Products and Technologies, Red Hat

“Very few organizations are able to fully silo their IT operations into a solely on-premises or public cloud footprint; instead, it’s a hybrid mixture of these environments that presents a path toward digital transformation. By extending our partnership with Microsoft, we’re able to offer the industry’s most comprehensive Kubernetes platform on a leading public cloud, providing the ability for customers to more easily harness innovation across the hybrid cloud without sacrificing production stability.”

Scott Guthrie, executive vice president, Cloud and Enterprise Group, Microsoft

“Microsoft and Red Hat are aligned in our vision to deliver simplicity, choice and flexibility to enterprise developers building cloud-native applications. Today, we’re combining both companies’ leadership in Kubernetes, hybrid cloud and enterprise operating systems to simplify the complex process of container management, with an industry-first solution on Azure.”

About Red Hat Inc.

Red Hat is the world’s leading provider of open source software solutions, using a community-powered approach to provide reliable and high-performing cloud, Linux, middleware, storage and virtualization technologies. Red Hat also offers award-winning support, training, and consulting services. As a connective hub in a global network of enterprises, partners, and open source communities, Red Hat helps create relevant, innovative technologies that liberate resources for growth and prepare customers for the future of IT. Learn more at http://www.redhat.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

1 Gartner Inc., Smarter with Gartner, “6 Best Practices for Creating a Container Platform Strategy,” Contributor: Christy Pettey, Oct. 31, 2017

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.


Container infrastructure a silver lining amid Intel CPU flaw fixes

Container infrastructure can help IT pros deploy updates as they fortify their systems against Meltdown and Spectre CPU vulnerabilities.

Sys admins everywhere must patch operating systems to reduce the effects of the recently discovered Intel CPU flaws, which hackers could exploit to access speculative execution data in virtual memory, potentially including data from other VMs that share the same host, or to gain root access.

However, those who run container infrastructures expect this additional work to hit them less hard than it will those who must patch VM-based infrastructures, especially manually, to combat Meltdown and Spectre.

“Most of the fixes out so far are kernel patches, and since containers share the kernel, there are fewer kernels to patch,” said Nuno Pereira, CTO of IJet International, a risk management company in Annapolis, Md.

VMware has pledged to issue fixes at the hypervisor level, and cloud providers such as Google and Amazon say they’ve patched their VMs, but it’s wise to patch the kernels, as well, Pereira said.

Security best practices dictate containers run with least-privilege access to the underlying operating system and host. That could limit the blast radius should a hacker use the Meltdown and Spectre vulnerabilities to gain access to a container. But experts emphasize that container infrastructure isn’t guaranteed immunity to the vulnerabilities, as container-level segmentation alone doesn’t fully defend against attacks.
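One way to check that least-privilege practice is a quick audit of running containers. The sketch below inspects the HostConfig section of `docker inspect` output (Privileged, CapAdd and PidMode are standard fields there); the specific findings it reports are illustrative:

```python
def audit_container(inspect_entry):
    """Flag settings in one `docker inspect` entry that widen the
    blast radius of a compromised container."""
    host_cfg = inspect_entry.get("HostConfig", {})
    findings = []
    if host_cfg.get("Privileged"):
        findings.append("runs privileged")
    for cap in host_cfg.get("CapAdd") or []:
        findings.append(f"added capability {cap}")
    if host_cfg.get("PidMode") == "host":
        findings.append("shares host PID namespace")
    return findings
```

Feed it parsed JSON from `docker inspect $(docker ps -q)` and alert on any container that returns a nonempty findings list.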

“No one should expect that just a container layer will mitigate the issue,” said Fernando Montenegro, an analyst with 451 Research. “This issue highlights that security assumptions we’ve made in the past have to be revisited.”

Ultimately, Intel and other chipmakers, such as AMD, will have to issue hardware- or firmware-level fixes to eliminate the Meltdown and Spectre vulnerabilities. It’s not clear what those will be yet, but enterprises with container orchestration in place will have a leg up when it comes time to accommodate those widespread changes.

“Most folks running containers have something like [Apache] Mesos or Kubernetes, and that makes it easy to do rolling upgrades on the infrastructure underneath,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. SPS uses Mesos for container orchestration, but it is evaluating Kubernetes, as well.
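The rolling-upgrade pattern Domeier describes can be sketched in a few lines. `kubectl cordon`, `drain` and `uncordon` are real kubectl subcommands; the SSH patch command and the injected `run` callable are assumptions for illustration:

```python
def rolling_patch(nodes, run):
    """Cordon, drain, patch and reboot each node in turn, then return
    it to service, so capacity loss is limited to one node at a time."""
    for node in nodes:
        run(["kubectl", "cordon", node])      # stop new pods landing here
        run(["kubectl", "drain", node, "--ignore-daemonsets"])
        run(["ssh", node, "sudo yum update -y kernel && sudo reboot"])
        run(["kubectl", "uncordon", node])    # rejoin the scheduler pool
```

In practice, `run` would wrap subprocess calls with error handling and a wait for the node to report Ready before the loop continues.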

Containers are often used with immutable infrastructure, which can be stood up and torn down at will, and that makes them an ideal way to absorb the infrastructure changes ahead, whether those stem from these specific Intel CPU flaws or from unforeseen future events.

“It really hammers home the case for immutability,” said Carmen DeArdo, technology director responsible for the software delivery pipeline at Nationwide Mutual Insurance Co. in Columbus, Ohio.

Meltdown and Spectre loom over containers
Container infrastructure can help ease the pain of Meltdown and Spectre vulnerabilities.

DevOps performance concerns


Infrastructure automation will help, but these vulnerabilities arose from CPU techniques that drastically improved performance, such as more efficient memory caching and prefetching. Because the patches curb those optimizations, mitigating the security risks can slow system performance.

PostgreSQL benchmark tests of worst-case scenarios show OS patches alone may degrade performance by 17% to 23%. Red Hat put out an advisory to customers stating its patches to the Red Hat Enterprise Linux kernel may reduce performance by 8% to 19% on workloads with highly cached random memory access.
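The capacity math behind those percentages is simple. A back-of-the-envelope sketch using the worst-case PostgreSQL figures above (the 1,000-requests/sec baseline is hypothetical):

```python
def patched_capacity(baseline_rps, overhead):
    """Throughput that remains after a fractional patch overhead."""
    return baseline_rps * (1 - overhead)

# A hypothetical service handling 1,000 requests/sec, at the 17%-23%
# worst-case range cited above, keeps roughly 830 down to 770 requests/sec.
best = patched_capacity(1000, 0.17)
worst = patched_capacity(1000, 0.23)
```

A fleet already running near saturation would need roughly a fifth more capacity to hold the same service levels after patching.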

“For Spectre, my understanding is that you need code changes and/or recompilation of userspace programs themselves to [fully] resolve it, so it is likely to be a long slog,” said Michael Bishop, CTO at Alpha Vertex, a New York-based fintech startup.

No one knows how future hardware fixes will affect CPU performance, which raises concerns for large enterprises that have grown accustomed to quick system builds in a DevOps continuous integration and delivery process. Reports have started to emerge that the performance change will affect the time it takes to compile programs, which is of particular concern to developers who want to make quick, frequent updates to apps.

“I remember when build jobs would run for hours, and we could go back to a developer mindset of, ‘Get things perfect,’ if feedback loops start to take too long,” Nationwide’s DeArdo said. “Eventually, that would impact lead time and productivity.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Windows Server hardening still weighs heavily on admins

In these heady times of software-defined technologies and container virtualization, many IT professionals continue to grapple with an issue that has persisted since the advent of the server: security.

Ever since businesses discovered the advantages of sharing resources in a client-server arrangement, there have also been intruders attempting to bypass the protections at the perimeter of the network. These attackers angle for any weak point — outdated protocols, known vulnerabilities in unpatched systems — or go the direct route and deliver a phishing email in the hopes that a user will click on a link to unleash a malicious payload onto the network.

Windows Server hardening remains top of mind for most admins. Just as there are many ways to infiltrate a system, there are multiple ways to blunt those attacks. The following compilation highlights the most-viewed tutorials on SearchWindowsServer in 2017, several of which addressed the ways IT can reduce exposure to a server-based attack.

5. Manage Linux servers with a Windows admin’s toolkit

While not every Windows administrator is comfortable away from the familiarity of point-and-click GUI management tools, more IT professionals are taking cues from the world of DevOps and implementing automation routines.

It took a while, but Microsoft eventually realized that spurning Linux also steered away potential customers. About 40% of the workloads on the Azure platform run some variation of Linux, Microsoft is a Platinum member of the Linux Foundation, and the company released SQL Server for Linux in September.

Many Windows shops now have a sprinkling of servers that use the open source operating system, and those administrators must figure out the best way to manage and monitor those Linux workloads. The cross-platform PowerShell Core management and automation tool promises to address this need, but until the offering reaches full maturity, this tip provides several options to help address the heterogeneous nature of many environments.

4. Disable SMB v1 for further Windows Server hardening

Unpatched Windows systems are tempting targets for ransomware and for the malware du jour, Bitcoin miners.

A layered security approach helps, but it’s even better to pull out threat enablers by the roots to blunt future attacks. The spate of cyberattacks in early 2017 hinged on an exploit in Server Message Block (SMB) v1 and locked up thousands of Windows machines around the world, yet administrators had been warned long before to disable the outdated protocol. This tip details the techniques to search for signs of SMB v1 and how to extinguish it from the data center.

3. Microsoft LAPS puts a lock on local admin passwords

For the sake of convenience, many Windows shops will use the same administrator password on each machine. While this practice helps administrators with the troubleshooting or configuration process, it’s also tremendously insecure. If that credential falls into the wrong hands, an intruder can roam through the network until they obtain ultimate system access — domain administrator privileges. Microsoft introduced its Local Administrator Password Solution (LAPS) in 2015 to help Windows Server hardening efforts. This explainer details the underpinnings of LAPS and how to tune it for your organization’s needs.

2. Chocolatey sweetens software installations on servers

Package installation is another task that benefits from the automation mindset. Microsoft offers a number of tools to install applications, but a package manager helps streamline this process through automated routines that pull in the right version of the software and make upgrades less of a chore. This tip walks administrators through the features of the Chocolatey package manager, ways to automate software installations and how an enterprise with special requirements can develop a more secure deployment method.

1. Reduce risks through managed service accounts

Most organizations employ service accounts for enterprise-grade applications such as Exchange Server or SQL Server. These accounts provide the elevated authorizations needed to run the program’s services. To avoid downtime, administrators quite often either set no expiration date on a service account password or use the same password for each service account. Needless to say, this practice makes it easier for an industrious intruder to compromise a business. A managed service account automatically generates new passwords, which removes the need for administrative intervention. This tip explains how to use this feature to lock down these accounts as part of IT’s overall Windows Server hardening efforts.
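The failure mode described above, passwords that never rotate, is straightforward to detect. A minimal sketch of such an age check (the account names and the 30-day policy are hypothetical):

```python
from datetime import datetime, timedelta

def stale_service_accounts(accounts, max_age_days=30, now=None):
    """Return account names whose password was last set before the
    policy cutoff; `accounts` maps name -> last password-set time."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return sorted(name for name, last_set in accounts.items()
                  if last_set < cutoff)
```

A managed service account makes this audit moot by rotating the credential automatically, but the check is useful for flagging the ordinary service accounts that remain.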

Container security platforms diverge on DevSecOps approach

SAN FRANCISCO — Container security platforms have begun to proliferate, but enterprises may have to watch the DevSecOps trend play out before they settle on a tool to secure container workloads.

Two container security platforms released this month — one by an up-and-coming startup and another by an established enterprise security vendor — take different approaches. NeuVector, a startup that introduced an enterprise edition at DevOps Enterprise Summit 2017, supports code and container-scanning features that integrate into continuous integration and continuous delivery (CI/CD) pipelines, but its implementation requires no changes to developers’ workflow.
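A typical CI/CD integration of such scanning is a gate step that fails the build when findings exceed a severity threshold. This is a generic sketch, not NeuVector's actual API; the severity levels and finding shape are assumptions:

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings, fail_at="high"):
    """Return True when any scan finding meets or exceeds the
    severity threshold at which the build should fail."""
    threshold = SEVERITY_RANK[fail_at]
    return any(SEVERITY_RANK[f["severity"]] >= threshold for f in findings)
```

A pipeline step would call `sys.exit(1)` when `gate(scan_results)` is true, so a vulnerable image never gets promoted, without developers changing how they write or commit code.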

By contrast, a product from the more established security software vendor CSPi, Aria Software Defined Security, allows developers to control the insertion of policy-enforcing libraries into container and VM images.

There’s still significant overlap between these container security platforms. NeuVector has CSPi’s enterprise customer base in its sights, with added support for noncontainer workloads and Lightweight Directory Access Protocol. Aria Software Defined Security includes network microsegmentation features for policy enforcement, which are NeuVector’s primary focus. And while developers inject Aria’s code into machine images, they aren’t expected to become security experts. Enterprise IT security pros set the policies the product enforces, and a series of wizards guides developers through the process of integrating its libraries.

Both vendors also agree on this: Modern IT infrastructures with DevOps pipelines that deliver rapid application changes require a fundamentally different approach to security than traditional vulnerability detection and patching techniques.

There’s definitely a need for new security techniques for containers that rely less on layers of VM infrastructure to enforce network boundaries, which can negate some of the gains to be had from containerization, said Jay Lyman, analyst with 451 Research.

However, even amid lots of talk about the need to “shift left” and get developers involved with IT security practices, bringing developers and security staff together at most organizations is still much easier said than done, Lyman said.

NeuVector 1.3 container security platform
NeuVector 1.3 captures network sessions automatically when container threats are detected, a key feature for enterprises.

Container security platforms encounter DevSecOps growing pains

As NeuVector and CSPi product updates hit the market, enterprise IT pros at the DevOps Enterprise Summit (DOES) here this week said few enterprises use containers at this point, and the container security discussion is even further off their radar. By the time containers are widely used, DevSecOps may be more mature, which could favor CSPi’s more hands-on developer strategy. But for now, developers and IT security remain sharply divided.


“Everyone needs to be security-conscious, but to demand developers learn security and integrate it into their own workflow, I don’t see how that happens,” said Joan Qafoku, a risk consulting associate at KPMG LLP in Seattle who works with an IT team at a large enterprise client also based in Seattle. That client, which Qafoku did not name, gives developers a security-focused questionnaire, but security integration into their process goes no further than that.

NeuVector’s ability to integrate into the CI/CD pipeline without changes to application code or the developer workflow was a selling point for Tobias Gurtzick, security architect for Arvato, an international outsourcing services company based in Gütersloh, Germany.

Still, this integration wasn’t perfect in earlier iterations of NeuVector’s product, Gurtzick said in an interview before DOES. With previous versions, Gurtzick’s team polled an API every two minutes to trigger container and code scans. NeuVector’s 1.3 release includes a new webhook notification feature that triggers code scans as part of continuous integration testing more elegantly, without the performance overhead of polling the API.
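The difference Gurtzick describes is polling on a timer versus reacting to pushed events. A minimal sketch of the receiving side of such a webhook (the event names and payload shape are hypothetical, not NeuVector's actual format):

```python
import json

def handle_scan_webhook(body):
    """Decide what to do with one pushed event, instead of polling an
    API every few minutes for the same information."""
    event = json.loads(body)
    if event.get("type") == "image_pushed":
        return ("scan", event["image"])
    return ("ignore", None)
```

The scan then runs only when the registry actually has something new, rather than on a fixed two-minute cadence regardless of activity.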

“That’s the most important feature of the new version,” Gurtzick said. He also pointed to added support for detailed network session snapshots that can be used in forensic analysis. Aria Software Defined Security offers a similar feature in its first release.

While early adopters of container security platforms, such as Gurtzick, have settled for themselves how developers and IT security should bake security into applications, the overall market has been slower to take shape as enterprises hash out that collaboration, Lyman said.

“Earlier injection of security into the development process is better, but that still usually falls to IT ops and security [staff],” Lyman said. “Part of the DevOps challenge is aligning those responsibilities with application development. Eventually, we’ll see more developer involvement in security, but it will take time and probably be pretty painful.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.