Tag Archives: Containers

McAfee launches security tool Mvision Cloud for Containers

Cybersecurity company McAfee on Tuesday announced McAfee Mvision Cloud for Containers, a product intended to help organizations ensure security and compliance of their cloud container workloads.

Mvision Cloud for Containers integrates container security with McAfee’s cloud access security broker (CASB) and cloud security posture management (CSPM) tools, according to the company.

“Data could … move between SaaS offerings, IaaS custom apps in various CSPs, containers and hybrid clouds. We want security to be consistent and predictable across the places data lives and workloads are processed. Integrating CASB and CSPM allows McAfee to provide consistent configuration policies and DLP/malware scanning that does not restrict the flexibility of the cloud,” said John Dodds, a director of product management at McAfee.

According to Andras Cser, vice president and principal analyst for security and risk management at Forrester, when it comes to evaluating a product like Mvision, it’s worth looking at factors such as “price, cost of integration, level of integration between acquired components and coverage of the client’s applications.”

Mvision Cloud uses the zero-trust application visibility and control capabilities developed by container security startup NanoSec for container-based deployments in the cloud. McAfee acquired NanoSec in September in a move to expand its cloud container security offerings.

Mvision Cloud for Containers builds on the existing McAfee Mvision Cloud platform, integrating cloud security posture management and vulnerability scanning for container workloads so that security policies can be implemented across different forms of cloud IaaS workloads, according to the company.

Other features of McAfee Mvision Cloud for Containers include:

  • Cloud security posture management: Ensures container platforms run in accordance with Center for Internet Security benchmarks and other compliance standards by applying configuration audit checks to container workloads.
  • Container image vulnerability scanning: Identifies weak or exploitable elements in container images to reduce the application’s risk profile.
  • DevOps integration: Ensures compliance and secures container workloads; executes security audits and vulnerability scans to identify risk and sends security incidents and feedback to developers within the build process; and monitors and prevents configuration drift on production deployments of container workloads.


VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

Torsten Volk, managing research director at Enterprise Management Associates

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), an IT analyst firm based in Portsmouth, New Hampshire. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).
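
For illustration, a stack like this can be pulled and started with the Docker CLI. The commands below are only a sketch: the image names and the ALLOW_EMPTY_PASSWORD setting follow Bitnami’s public catalog conventions, and the port and container names are placeholders.

# Illustrative sketch: start a Bitnami WordPress stack from prebuilt images
docker run -d --name mariadb -e ALLOW_EMPTY_PASSWORD=yes bitnami/mariadb
docker run -d --name wordpress -p 8080:8080 --link mariadb:mariadb -e ALLOW_EMPTY_PASSWORD=yes bitnami/wordpress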

Bitnami takes care of several mundane programming chores. It keeps all components up to date — so if it finds a security problem, it patches that problem — and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks also are compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby. It supports cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers themselves cover a wide variety of applications, including Abante Cart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together


VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating its variety of entities in a cohesive manner presents a major undertaking.


Get to know data storage containers and their terminology

Data storage containers have become a popular way to create and package applications for better portability and simplicity. Seen by some analysts as the technology to unseat virtual machines, containers have steadily gained more attention as of late, from customers and vendors alike.

Why choose containers and containerization over the alternatives? Containers work on bare-metal systems, cloud instances and VMs, and across Linux and select Windows and Mac OSes. Containers typically use fewer resources than VMs and can bind together application libraries and dependencies into one convenient, deployable unit.

Below, you’ll find key terms about containers, from technical details to specific products on the market. If you’re looking to invest in containerization, you’ll need to know these terms and concepts.

Getting technical

Containerization. With its roots in partitioning, containerization is an efficient deployment strategy that virtually isolates applications, enabling multiple containers to run on one machine while sharing the same OS. Containers run independent processes in a shared user space and can run in different environments, which makes them a flexible alternative to virtual machines.

The benefits of containerization include reduced hardware overhead and portability, while concerns include the security of data stored on containers. Because all of the containers run under one OS, if one container is compromised, the others may be at risk as well.

Container management software. As the name indicates, container management software is used to simplify, organize and manage containers. Container management software automates container creation, destruction, deployment and scaling and is particularly helpful in situations with large numbers of containers on one OS. However, the orchestration aspect of management software is complex and setup can be difficult.

Products in this area include Kubernetes, an open source container orchestration software; Apache Mesos, an open source project that manages compute clusters; and Docker Swarm, a container cluster management tool.
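
As a rough sketch of what that automation looks like in practice, the Kubernetes CLI can create, scale and destroy a set of containers with a few commands. The deployment name and image below are placeholders, not tied to any particular product mentioned here.

# Create a deployment, scale it out, then tear it down (names are illustrative)
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=10
kubectl delete deployment web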

Persistent storage. In order to be persistent, a storage device must retain data after being shut off. While persistence is essentially a given when it comes to modern storage, the rise of containerization has brought persistent storage back to the forefront.

Containers did not always support persistent storage, which meant that data created with a containerized app would disappear when the container was destroyed. Luckily, storage vendors have made enough advances in container technology to solve this issue and retain data created on containers.
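
A minimal sketch of the idea with the Docker CLI: a named volume outlives the container that writes to it, so the data survives when the container is destroyed and recreated. The volume, container and image names below are illustrative.

# Create a named volume and mount it into a database container
docker volume create appdata
docker run -d --name db -e MYSQL_ROOT_PASSWORD=example -v appdata:/var/lib/mysql mysql
# Remove the container; the volume and the data it holds remain
docker rm -f db
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=example -v appdata:/var/lib/mysql mysql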

Stateful app. A stateful app saves client data from the activities of one session for use in the next session. Most applications and OSes are stateful, but because stateful apps didn’t scale well in early cloud architectures, developers began to build more stateless apps.

With a stateless app, each session is carried out as if it was the first time, and responses aren’t dependent upon data from a previous session. Stateless apps are better suited to cloud computing, in that they can be more easily redeployed in the event of a failure and scaled out to accommodate changes.

However, containerization allows files to be pulled into the container during startup and persist somewhere else when containers stop and start. This negates the issue of stateful apps becoming unstable when introduced to a stateless cloud environment.

Container vendors and products

While there is one vendor undoubtedly ahead of the pack when it comes to modern data storage containers, the field has opened up to include some big names. Below, we cover just a few of the vendors and products in the container space.

Docker. Probably the name most synonymous with data storage containers, Docker is credited with bringing about the container renaissance in the IT space. Docker’s platform is open source, which enables users to register and share containers over various hosts in both private and public environments. In recent years, Docker has made containers broadly accessible, and it offers various editions of its containerization technology.

When you refer to Docker, you likely mean either the company itself, Docker Inc., or the Docker Engine. Initially developed for Linux systems, the Docker Engine has since been extended to run natively on both Windows and Apple OSes. The Docker Engine supports the tasks and workflows involved in building, shipping and running container-based applications.

Container Linux. Originally referred to as CoreOS Linux, Container Linux by CoreOS is an open source OS that deploys and manages the applications within containers. Container Linux is based on the Linux kernel and is designed for massive scale and minimal overhead. Although Container Linux is open source, CoreOS sells support for the OS. Acquired by Red Hat in 2018, CoreOS develops open source tools and components.

Azure Container Instances (ACI). With ACI, developers can deploy data storage containers on the Microsoft Azure cloud. Organizations can spin up a new container via the Azure portal or command-line interface, and Microsoft automatically provisions and scales the underlying compute resources. ACI also supports standard Docker images and Linux and Windows containers.
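
For example, a single container instance can be launched from the Azure CLI roughly as follows. The resource group, container name and DNS label are placeholders; the aci-helloworld image is one of Microsoft’s published samples.

# Spin up a container instance and expose it on port 80 (names are placeholders)
az container create --resource-group myResourceGroup --name mycontainer --image mcr.microsoft.com/azuredocs/aci-helloworld --dns-name-label aci-demo-12345 --ports 80
# Check the provisioning state and the assigned fully qualified domain name
az container show --resource-group myResourceGroup --name mycontainer --query "{state:provisioningState,fqdn:ipAddress.fqdn}" --output table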

Microsoft Windows containers. Windows containers are abstracted and portable operating environments supported by the Microsoft Windows Server 2016 OS. They can be managed with Docker and PowerShell and support established Windows technologies. Along with Windows Containers, Windows Server 2016 also supports Hyper-V containers.

VMware vSphere Integrated Containers (VIC). While VIC can refer to individual container instances, it is also a platform that deploys and manages containers within VMs from within VMware’s vSphere VM management software. Previewed under the name Project Bonneville, VMware’s container play ships with the virtual container host, which represents the tools and hardware resources that create and control container services.


Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To provide a simple demonstration of the workflow, we have a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand that speak over physical COM, so we ran the application inside a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, an application was created that generates random integer values and sends them over a named pipe to the VM’s virtual COM port (the PowerShell console on the left).

As we see in the video at the beginning, if we do not assign COM ports to our container, when the application runs in the container and tries to open a handle to the COM port, it fails with an IOException (because as far as the container knew, the COM port didn’t exist!). On our second run of the container, we assign the COM port to the container and the application successfully gets and prints out the incoming random ints generated by our app running on the host.

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we’re passing to the device argument is simple: it looks for an IdType and an Id. For this coming release of Windows, we only support an IdType of “class”. For Id, this is a device interface class GUID. The values are delimited by a slash, “/”. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for a user to specify an interface class, and all devices that identify as implementing this class will be plumbed into the container.

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes. We’re starting with simple buses. The complete list that we currently support is below.

Device Type    Interface Class GUID
GPIO           916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus        A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port       86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus        DCDE6AF9-6610-4285-828F-CAAF78C424CC

Stay tuned for Part 2 of this blog, which explores the architectural decisions we made in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.

Cheers,

Craig Wilhite (@CraigWilhite)

Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but there are specific management strategies that work for both that admins should incorporate.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.

One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins that use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
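
With Docker on a Windows Server 2016 host, that difference surfaces as an isolation mode chosen at run time. The commands below are a hedged illustration; the image tag is one example of a Windows Server Core image.

# Shared-kernel Windows Server container
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver
# Same image, but wrapped in a lightweight utility VM for kernel-level isolation
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd /c ver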

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.
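
In day-to-day terms, that split looks roughly like the following: VMs are driven through Hyper-V Manager or the Hyper-V PowerShell module, while containers are driven through the Docker CLI. The VM and container names below are placeholders.

# VM management via the Hyper-V PowerShell module
Get-VM
Start-VM -Name "WebServer01"
Checkpoint-VM -Name "WebServer01" -SnapshotName "pre-patch"
# Container management via the Docker CLI
docker ps
docker stats --no-stream
docker stop web01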

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.

Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with containers in a lab environment before attempting to deploy them in production. Although containers are based on Hyper-V, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.


Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.
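
A hedged sketch of that patch cycle using generic Docker and Kubernetes commands; the registry, image and deployment names are placeholders rather than anything AKS-specific.

# Pull the refreshed base image referenced by the Dockerfile's FROM line
docker pull ubuntu:22.04
# Rebuild and push the application image on top of the patched base
docker build -t myregistry.azurecr.io/myapp:patched .
docker push myregistry.azurecr.io/myapp:patched
# Roll the new image out to the running deployment
kubectl set image deployment/myapp myapp=myregistry.azurecr.io/myapp:patched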

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps-style workflows, and isolate and secure applications from one another.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.
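
Using the cluster names from the walkthrough later in this article, adding a node is a single CLI call, for example:

# Scale the AKS node pool from two nodes to three
az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3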

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

Starting a PowerShell session in the Azure Cloud Shell

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService


How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to the operations team, which will import them into Kubernetes or clone them from GitHub and point the kubectl utility at the relevant manifest.
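
Deploying from a manifest then comes down to a short kubectl sequence; the manifest file and service names below are illustrative.

# Apply the manifest to the cluster and watch the workload come up
kubectl apply -f ./myapp-manifest.yaml
kubectl get pods
# Watch until the front-end service is assigned an external IP
kubectl get service myapp-front --watch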

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.

OpenShift on OpenStack aims to ease VM, container management

Virtualization admins increasingly use containers to secure applications, but managing both VMs and containers in the same infrastructure presents some challenges.

IT can spin containers up and down more quickly than VMs, and they require less overhead, so there are several practical use cases for the technology. Security can be a concern, however, because all containers share the same underlying OS. As such, mission-critical applications are still better suited to VMs.

Using both containers and VMs can be helpful, because they each have their place. Still, adding containers to a traditional virtual infrastructure adds another layer of complexity and management for admins to contend with. The free and open source OpenStack provides infrastructure as a service and VM management, and organizations can run Red Hat’s OpenShift on OpenStack — and other systems — for platform as a service and container management.

Here, Brian Gracely, director of OpenShift product strategy at Red Hat, based in Raleigh, N.C., explains how to manage VMs and containers, and he shares how OpenShift on OpenStack can help.

What are the top challenges of managing both VMs and containers in virtual environments?

Brian Gracely, director of OpenShift product strategy at Red Hat

Brian Gracely: The first one is really around people and existing processes. You have infrastructure teams who, over the years, have become very good at managing VMs and … replicating servers with VMs, and they’ve built a set of operational things around that. When we start having the operations team deal with containers, a couple of things are different. Not all of them are as fluent in Linux as you might expect; containers are [based on] the OS. A lot of the virtualization people, especially in the VMware world, came from a Windows background. So, they have to learn a certain amount about what to do with the OS and how to deal with Linux constructs and commands.

Container environments tend to be more closely tied to people who are doing application developments. Application developers are … making changes to the application more frequently and scaling them up and down. The concept of the environment changing more frequently is sort of new for VM admins.

What is the role of OpenStack in modern data centers where VMs and containers coexist?

Gracely: OpenStack can become either an augmentation of what admins used to do with VMware or a replacement for VMware that gives them all of the VM capabilities they want to have in terms of networking, storage and so forth. In most of those cases, they want to also have hybrid capabilities, across public and private. And they can use OpenShift on OpenStack as that abstraction layer that allows them to run containerized applications and/or VM applications in their own data center.

Then, they’ll run OpenShift in one of the public clouds — Amazon or Azure or Google — and the applications that run in the cloud will end up being containerized on OpenShift. It gives them consistency from what the operations look like, and then there’s a pretty simple way of determining which applications can also run in the public cloud, if necessary.

What OpenShift features are most important to container management?

Gracely: OpenShift is based on Kubernetes technology — the de facto standard for managing containers.

If you’re a virtualization person … it’s essentially like vCenter for containers. It centrally manages policies, it centrally manages deployments of containers, [and] it makes sure that you use your compute resources really efficiently. If a container dies, an application dies, it’s going to be constantly monitoring that and will restart it automatically. Kubernetes at the core of OpenShift is the thing that allows people to manage containers at scale, as opposed to managing them one by one.
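
That self-healing behavior is easy to see with a quick, illustrative kubectl exercise; the deployment and pod names below are placeholders.

# Run three replicas of an app, then delete one pod and watch Kubernetes replace it
kubectl create deployment demo --image=nginx
kubectl scale deployment demo --replicas=3
kubectl get pods
kubectl delete pod demo-6b4d5f7c9d-abcde   # placeholder name; substitute a pod from the list above
kubectl get pods                           # a new pod appears automatically to restore the desired count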

What can virtualization admins do to improve their container management skills?

Gracely: Become Linux-literate, Linux-skilled. There are plenty of courses out there that allow you to get familiar with Linux. Container technology, fundamentally, is Linux technology, so that’s a fundamental thing. There are tools like Katacoda, which is an online training system; you just go in through your browser. It gives you a Kubernetes environment to play around with, and there’s also an OpenShift set of trainings and tools that are on there.


How can admins streamline management practices between other systems for VMs and OpenShift for containers?

Gracely: OpenShift runs natively on top of both VMware and OpenStack, so for customers that just want to stay focused on VMs, their world can look pretty much the way it does today. They’re going to provision however many VMs they need, and then give self-service access to the OpenShift platform and allow their developers to place containers on there as necessary. The infrastructure team can simply make sure that it’s highly available, that it’s patched, and if more capacity is necessary, add VMs.
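
From the developer side, that self-service access typically happens through the OpenShift command-line client. The cluster URL, project and image below are illustrative only.

# Log in to the OpenShift cluster the infrastructure team provisioned
oc login https://openshift.example.com:8443
# Create a project and deploy a containerized app from an image
oc new-project demo-app
oc new-app nginx
oc get pods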

Where we see … things get more efficient is people who don’t want to have silos anymore between the ops team and the development team. They’re either going down a DevOps path or combining them together; they want to merge processes. This is where we see them doing much more around automating environments. So, instead of just statically [building] a bunch of VMs and leaving them alone, they’re using tools like Ansible to provision not only the VMs, but the applications that go on top of those VMs and the local database.

Will VMs and containers continue to coexist, or will containers overtake VMs in the data center?

Gracely: More and more so, we’re seeing customers taking a container-first approach with new applications. But … there’s always going to be a need for good VM management, being able to deliver high performance, high I/O stand-alone applications in VMs. We very much expect to see a lot of applications stay in VMs, especially ones that people don’t expect to need any sort of hybrid cloud environment for, some large databases for I/O reasons, or [applications that], for whatever reason, people don’t want to put in containers. Then, our job is to make sure that, as containers come in, that we can make that one seamless infrastructure.

Windows Server containers and Hyper-V containers explained

A big draw of Windows Server 2016 is the addition of containers that provide similar capabilities to those from leading open source providers.

This Microsoft platform actually offers two different types of containers: Windows Server containers and Hyper-V containers. Before you decide which option best meets your needs, take a look at these five quick tips so you have a better understanding of container architecture, deployment and performance management.

Windows Server containers vs. Hyper-V containers

Although Windows Server containers and Hyper-V containers do the same thing and are managed the same way, the level of isolation they provide is different. Windows Server containers share the underlying OS kernel, which makes them smaller than VMs because they don’t each need a copy of the OS. Security can be a concern, however, because if one container is compromised, the OS and all of the other containers could be at risk.

Hyper-V containers and their dependencies reside in Hyper-V VMs and provide an additional layer of isolation. For reference, Hyper-V containers and Hyper-V VMs have different use cases. Containers are typically used for microservices and stateless applications because they are disposable by design and, as such, don’t store persistent data. Hyper-V VMs, typically equipped with virtual hard disks, are better suited to mission-critical applications.

The role of Docker on Windows Server

In order to package, deliver and manage Windows container images, you need to download and install Docker on Windows Server 2016. Docker Swarm, supported by Windows Server, provides orchestration features that help with cluster creation and workload scheduling. After you install Docker, you’ll need to configure it for Windows, a process that includes selecting secured connections and setting disk paths.

One key advantage of Docker on Windows is support for container image automation. You can use container images for continuous integration cycles because they’re stored as code and can be quickly recreated when need be. You can also download and install a module to extend PowerShell to manage Docker Engine; just make sure you have the latest versions of both Windows and PowerShell before you do so.
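
The download-and-install step has typically been a short PowerShell sequence; the commands below reflect the provider module Microsoft documented for Windows Server 2016 and may change over time.

# Install the Docker provider module and the Docker engine, then reboot
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force
# Verify the engine after the reboot
docker version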

Meet Hyper-V container requirements

If you prefer to use Hyper-V containers, make sure you have Server Core or Windows Server 2016 installed, along with the Hyper-V role. There is also a list of minimum resource requirements necessary to run Hyper-V containers. First, you need at least 4 GB of memory for the host VM. You also need a processor with Intel VT-x and at least two virtual processors for the host VM. Unfortunately, nested virtualization doesn’t yet support AMD processors.
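
When the container host is itself a VM, nested virtualization must also be enabled on the physical Hyper-V host before Hyper-V containers will start. A minimal sketch, assuming a host VM named ContainerHost (run these on the physical host while the VM is powered off):

# Meet the minimum resource requirements and expose virtualization extensions to the guest
Set-VM -VMName "ContainerHost" -ProcessorCount 2 -MemoryStartupBytes 4GB
Set-VMProcessor -VMName "ContainerHost" -ExposeVirtualizationExtensions $true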

Although these requirements might not seem extensive, it’s important to carefully consider resource allocation and the workloads you intend to run on Hyper-V containers before deployment. When it comes to container images, you have two different options: a Windows Server Core image and a Nano Server image.

OS components affect both container types

Portability is a key advantage of containers. Because an application and all its dependencies are packaged within the container, it should be easy to deploy on other platforms. Unfortunately, there are different elements that can negatively affect this deployment flexibility. While containers share the underlying OS kernel, they do contain their own OS components, also known as the OS layer. If these components don’t match up with the OS kernel running on the host, the container will most likely be blocked.

The four-level version notation system Microsoft uses includes the major, minor, build and revision levels. Before Windows Server containers or Hyper-V containers will run on a Windows Server host, the major, minor and build levels must match, at minimum. The containers will still start if the revision level doesn’t match, but they might not work properly.
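
One quick way to sanity-check the match is to compare the host’s build number with the OS version baked into the image. The commands below are a hedged illustration; the ltsc2016 tag corresponds to the Windows Server 2016 (build 14393) line.

# Host build, e.g. 10.0.14393.x on Windows Server 2016
[System.Environment]::OSVersion.Version
# OS version recorded in a container image
docker pull mcr.microsoft.com/windows/servercore:ltsc2016
docker inspect --format "{{.OsVersion}}" mcr.microsoft.com/windows/servercore:ltsc2016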

Antimalware tools and container performance

Because of shared components, like those of the OS layer, antimalware tools can affect container performance. The components or layers are shared through the use of placeholders; when those placeholders are read, the reads are redirected to the underlying component. If the container modifies a component, the container replaces the placeholder with the modified one.

Antimalware tools aren’t aware of the redirection and don’t know which components are placeholders and which components are modified, so the same components end up being scanned multiple times. Fortunately, there is a way to make antimalware tools aware of this activity. The tool can attach a parameter to the create CallbackData and check the redirection flags in the extra create parameter (ECP). The ECP will then indicate either that the file was opened from a remote (shared) layer or that it was opened from a local layer.

Next Steps

Ensure container isolation and prevent root access

Combine microservices and containers

Maintain high availability with containers and data mirroring