Tag Archives: Containers

Trilio’s Kubernetes data protection enters early access

A global pandemic sidelined the drive for container backup, but at least one vendor expects the technology will make a complete recovery.

Trilio launched early access for its TrilioVault for Kubernetes, a platform that enables simple snapshotting, backup and recovery for applications and their associated data and metadata within Kubernetes containers. In beta since November 2019, TrilioVault for Kubernetes is now offered as a 30-day free trial, a free basic edition for up to ten nodes or an enterprise edition that charges on a per-node basis. General availability is planned for the end of May, and Trilio will add features such as advanced retention policies and backup script injection.

TrilioVault for Kubernetes is agentless, packaged as an Operator and deployed as a Custom Resource Definition in Kubernetes. It enables Kubernetes to natively perform snapshotting and recovery for use cases such as backup, disaster recovery and migration of backup data between clouds and test/dev environments. The platform is compatible with any storage, whether NFS, CSI or S3, and users can point to AWS, Azure, Google Cloud, IBM Cloud or a private cloud as the backup target. It backs up applications provisioned by Labels, Helm or Operator within Kubernetes or Red Hat OpenShift. The product is OpenShift Operator certified and can be installed from OperatorHub.
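
To make the CRD-driven workflow concrete, here is a rough sketch of how an operator-managed backup might be driven from kubectl; the resource and file names below are hypothetical placeholders for illustration, not Trilio's documented API:

# List the custom resource definitions the operator registers (names hypothetical)
kubectl get crds | grep -i trilio
# Apply a backup custom resource prepared for the application (filename is a placeholder)
kubectl apply -f tvk-backup.yaml
# Watch the backup resource until the operator reports completion
kubectl get backups -w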

There has been a growing number of organizations developing and deploying applications in the cloud, said Trilio CEO David Safaii. Trilio’s first offering, TrilioVault for OpenStack, went after the cloud-native data protection market, and Safaii said there’s now rising demand for a similar product for Kubernetes applications.

A report recently published by GigaOm found that Kubernetes adoption is on the rise, as enterprises are developing and testing applications in containers. This has led to more stateful applications living in containers, and a need to back them up.

Safaii said there are generally two ways companies protect Kubernetes environments, and both have flaws. Custom scripting can achieve backup and recovery, but requires a lot of maintenance and updates as cloud-native applications continually change. Legacy backup vendors deploy agents and focus on their own proprietary tools and storage. Both methods are missing what Safaii considers the two essential components of cloud data protection: an open, universal backup schema and infinite scalability.

“People are not happy with what’s out there now,” Safaii said.

Vendors such as Asigra, Storware, Cohesity and Storidge have products that can perform agentless backups for Kubernetes.

Data protection vendors had been getting into container backup before the COVID-19 pandemic forced them to switch focus to endpoint and remote backup. Marc Staimer, president of Dragon Slayer Consulting, said there has been movement on containers, with increased adoption and rising demand for protecting them, but the pandemic has pushed that interest aside.

[Screenshot: TrilioVault for Kubernetes in the OpenShift OperatorHub. The product is OpenShift certified and can be installed from OperatorHub.]

Staimer said before the pandemic, he would’ve seen this as Trilio going to market at the right time and addressing a market need just as momentum for it was picking up. Now, he’s not so sure. While he believes containers and container backup will see greater adoption in the future, everyone is more concerned about backing up their laptops right now.

“Containers have sort of taken a backseat for now,” Staimer said. “They would’ve been riding the beginning of the wave, but the wave’s been detoured.”

Christophe Bertrand, senior analyst at Enterprise Strategy Group, similarly said there was a bright future for containers. Containers are part of a trend toward agile development, and products that understand and protect container environments will be critical. Containers and Kubernetes will displace or replace some of the workloads that virtualization and VMware handle right now. However, Bertrand conceded that future may be delayed.

“It’s definitely a hot topic that’s been eclipsed right now,” Bertrand said.


Windows Server 2008 end of life means it’s time to move

Windows Server 2008 end of life is here, so will you go to the cloud, use containers or carry on with on-premises Windows Server?

Windows Server 2008 and Server 2008 R2 left support recently, giving administrators one last batch of security updates on January Patch Tuesday. Organizations that have not migrated to a supported platform will not get further security fixes for Server 2008/2008 R2 machines unless they have enrolled in the Extended Security Updates program or moved those workloads into the Microsoft cloud for three additional years of security updates. Organizations that choose to continue without support will roll the dice with machines that now present a liability.

In many instances, a switch to a newer version of Windows Server is not an option. For example, many hospitals run equipment that relies on applications that do not function on a 64-bit operating system, which rules out every currently supported Windows Server OS. In these cases, IT must keep those workloads running but keep them as secure as possible using various methods, such as isolating the machine with a segregated virtual LAN or even pulling the plug by air-gapping those systems.

What works best for your organization is based on many factors, such as cost and the IT department’s level of expertise and comfort level with newer technologies.

For some, a cloudy forecast

The decision to stay on Server 2008/2008 R2 comes with a price. Enrolling in the Extended Security Updates program requires Software Assurance, and the updates cost roughly 75% of a full Windows Server license each year.

This expense will motivate some organizations to explore ways to reduce those costs, and one alternative is to push those Server 2008/2008 R2 workloads into the Azure cloud. This migration will require some adjustment as the capital expense of an on-premises system shifts to the operational expense of the cloud consumption model.

The mention of cloud doesn’t fill IT with as much apprehension as it once did, but the move might require some technological gymnastics to get certain workloads running when one component, such as the database, needs to stay on premises while the application runs in the cloud.

Some other considerations include increasing the available bandwidth to accommodate the need for lower latency when working with cloud workloads and learning how to patch and do other administrative tasks when the system is in Azure.

Application virtualization is another option

While traditional virtualization is the norm for most Windows shops, there’s a relatively new form of virtualization that is another migration option. Putting a Windows Server 2008/2008 R2 workload into a Docker container might not seem as far-fetched as it did when this technology was in its infancy.

[Diagram: Containers versus VMs. Because each virtual machine uses a guest operating system, VMs use more disk space than containers, which share an underlying operating system.]

Microsoft added support for Windows containers on Windows Server 2016 and 2019, as well as the Semi-Annual Channel releases. The migration process puts the legacy application into a container, which then runs on top of a supported Windows Server OS.

Administrators will need to get up to speed with the differences between the two forms of virtualization, and the advantages and disadvantages of migrating a server workload to a containerized application. For example, all the containerized applications run on top of a shared kernel, which might not work in environments with a requirement for kernel isolation for sensitive workloads.

Storage Migration Service eases file server upgrades

Microsoft released Windows Server 2019 with a host of new features, including the Storage Migration Service, which attempts to reduce the friction associated with moving file servers to a newer Windows Server operating system.

One standby for many organizations is the on-premises file server that holds documents, images and other data that employees rely on to do their jobs. The Windows Server 2008 end of life put many in IT in the difficult position of upgrading file servers running on this legacy server OS. It’s not as simple as copying all the files over to the new server, because stored data carries numerous dependencies that must be brought over without disrupting the business during the migration.

The Storage Migration Service runs from within Microsoft’s relatively new administrative tool called the Windows Admin Center. The feature is not limited to just shifting to a supported on-premises Windows Server version but will coordinate the move to an Azure File Sync server or a VM that runs in Azure.


McAfee launches security tool Mvision Cloud for Containers

Cybersecurity company McAfee on Tuesday announced McAfee Mvision Cloud for Containers, a product intended to help organizations ensure security and compliance of their cloud container workloads.

Mvision Cloud for Containers integrates container security with McAfee’s cloud access security broker (CASB) and cloud security posture management (CSPM) tools, according to the company.

“Data could … move between SaaS offerings, IaaS custom apps in various CSPs, containers and hybrid clouds. We want security to be consistent and predictable across the places data live and workloads are processed. Integrating CASB and CSPM allows McAfee to provide consistent configuration policies and DLP/malware scanning that does not restrict the flexibility of the cloud,” said John Dodds, a director of product management at McAfee.

According to Andras Cser, vice president and principal analyst for security and risk management at Forrester, when it comes to evaluating a product like Mvision, it’s worth looking at factors such as “price, cost of integration, level of integration between acquired components and coverage of the client’s applications.”

Mvision Cloud applies the zero-trust application visibility and control capabilities built by container security startup NanoSec to container-based deployments in the cloud. McAfee acquired NanoSec in September in a move to expand its cloud container security offerings.

Mvision Cloud for Containers builds on the existing McAfee Mvision Cloud platform, integrating cloud security posture management and vulnerability scanning for container workloads so that security policies can be implemented across different forms of cloud IaaS workloads, according to the company.

Other features of McAfee Mvision Cloud for Containers include:

  • Cloud security posture management: Ensures the container platforms run in accordance with Center for Internet Security and other compliance standards by applying configuration audit checks to container workloads.
  • Container image vulnerability scanning: Identifies weak or exploitable elements in container images to reduce the application’s risk profile.
  • DevOps integration: Ensures compliance and secures container workloads; executes security audits and vulnerability scanning to identify risk and send security incidents and feedback to developers within the build process; and monitors and prevents configuration drift on production deployments of the container workloads.


VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), an IT analyst firm based in Portsmouth, New Hampshire. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).
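
As a rough illustration of what such a stack looks like in container form, Bitnami publishes prebuilt images on Docker Hub; the wiring between them (networks, environment variables) is described in the image documentation:

# Pull the prepackaged application and database images
docker pull bitnami/wordpress
docker pull bitnami/mariadb
# Put both containers on one network so the app can reach the database
docker network create wp-net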

Bitnami takes care of several mundane programming chores. It keeps all components up to date, patching any security problem it finds, and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks are also compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby, and with cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers cover a wide variety of applications, including Abante Cart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together


VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating its variety of entities in a cohesive manner presents a major undertaking.


Get to know data storage containers and their terminology

Data storage containers have become a popular way to create and package applications for better portability and simplicity. Seen by some analysts as the technology to unseat virtual machines, containers have steadily gained more attention as of late, from customers and vendors alike.

Why choose containers and containerization over the alternatives? Containers work on bare-metal systems, cloud instances and VMs, and across Linux and select Windows and Mac OSes. Containers typically use fewer resources than VMs and can bind together application libraries and dependencies into one convenient, deployable unit.

Below, you’ll find key terms about containers, from technical details to specific products on the market. If you’re looking to invest in containerization, you’ll need to know these terms and concepts.

Getting technical

Containerization. With its roots in partitioning, containerization is an efficient form of application virtualization that isolates applications, enabling multiple containers to run on one machine while sharing the same OS. Containers run independent processes in a shared user space and can run on different environments, which makes them a flexible alternative to virtual machines.

The benefits of containerization include portability and reduced hardware overhead, while the chief concern is the security of data stored in containers. Because all of the containers run under one OS, a vulnerability exploited in one container can expose the others as well.

Container management software. As the name indicates, container management software is used to simplify, organize and manage containers. Container management software automates container creation, destruction, deployment and scaling and is particularly helpful in situations with large numbers of containers on one OS. However, the orchestration aspect of management software is complex and setup can be difficult.

Products in this area include Kubernetes, an open source container orchestration software; Apache Mesos, an open source project that manages compute clusters; and Docker Swarm, a container cluster management tool.
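
As a minimal sketch of the lifecycle such software automates, here are the equivalent Kubernetes commands (deployment and image names are illustrative):

# Create a deployment running a single nginx container
kubectl create deployment web --image=nginx
# Scale it out; the orchestrator schedules the new replicas across the cluster
kubectl scale deployment web --replicas=5
# Destroy the deployment and its containers when finished
kubectl delete deployment web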

Persistent storage. In order to be persistent, a storage device must retain data after being shut off. While persistence is essentially a given when it comes to modern storage, the rise of containerization has brought persistent storage back to the forefront.

Containers did not always support persistent storage, which meant that data created with a containerized app would disappear when the container was destroyed. Luckily, storage vendors have made enough advances in container technology to solve this issue and retain data created on containers.
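
A minimal Docker sketch of the concept; the volume and path names are arbitrary:

# Create a named volume that outlives any single container
docker volume create appdata
# Write a file to the volume from a short-lived container
docker run --rm -v appdata:/data alpine sh -c "echo hello > /data/greeting"
# A later container mounts the same volume and sees the data
docker run --rm -v appdata:/data alpine cat /data/greeting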

Stateful app. A stateful app saves client data from the activities of one session for use in the next session. Most applications and OSes are stateful, but because stateful apps didn’t scale well in early cloud architectures, developers began to build more stateless apps.

With a stateless app, each session is carried out as if it was the first time, and responses aren’t dependent upon data from a previous session. Stateless apps are better suited to cloud computing, in that they can be more easily redeployed in the event of a failure and scaled out to accommodate changes.

However, containerization allows files to be pulled into the container during startup and persist somewhere else when containers stop and start. This negates the issue of stateful apps becoming unstable when introduced to a stateless cloud environment.

Container vendors and products

While there is one vendor undoubtedly ahead of the pack when it comes to modern data storage containers, the field has opened up to include some big names. Below, we cover just a few of the vendors and products in the container space.

Docker. Probably the name most synonymous with data storage containers, Docker is even credited with bringing about the container renaissance in the IT space. Docker’s platform is open source, which enables users to register and share containers over various hosts in both private and public environments. Docker made containers accessible in recent years and offers various editions of containerization technology.

When you refer to Docker, you likely mean either the company itself, Docker Inc., or the Docker Engine. Initially developed for Linux systems, the Docker Engine has since been updated to run natively on both Windows and Apple OSes. The Docker Engine supports the tasks and workflows involved in building, shipping and running container-based applications.
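
That build-ship-run workflow maps onto three Engine commands; the image name, registry and port below are placeholders:

# Build an image from a Dockerfile in the current directory
docker build -t myorg/myapp:1.0 .
# Ship it to a shared registry
docker push myorg/myapp:1.0
# Run it anywhere the Docker Engine is installed
docker run -d -p 8080:80 myorg/myapp:1.0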

Container Linux. Originally referred to as CoreOS Linux, Container Linux by CoreOS is an open source OS that deploys and manages the applications within containers. Container Linux is based on the Linux kernel and is designed for massive scale and minimal overhead. Although Container Linux is open source, CoreOS sells support for the OS. Acquired by Red Hat in 2018, CoreOS develops open source tools and components.

Azure Container Instances (ACI). With ACI, developers can deploy data storage containers on the Microsoft Azure cloud. Organizations can spin up a new container via the Azure portal or command-line interface, and Microsoft automatically provisions and scales the underlying compute resources. ACI also supports standard Docker images and Linux and Windows containers.
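
For a sense of the CLI path, a single az command can stand up a container instance; the resource group and DNS label below are placeholders, and the image is Microsoft’s public aci-helloworld sample:

# Create a container instance and expose it on port 80 with a public DNS name
az container create --resource-group myGroup --name hello --image mcr.microsoft.com/azuredocs/aci-helloworld --ports 80 --dns-name-label aci-demo-12345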

Microsoft Windows containers. Windows containers are abstracted and portable operating environments supported by the Microsoft Windows Server 2016 OS. They can be managed with Docker and PowerShell and support established Windows technologies. Along with Windows Containers, Windows Server 2016 also supports Hyper-V containers.

VMware vSphere Integrated Containers (VIC). While VIC can refer to individual container instances, it is also a platform that deploys and manages containers within VMs from within VMware’s vSphere VM management software. Previewed under the name Project Bonneville, VMware’s play on containers comes with the virtual container host, which represents tools and hardware resources that create and control container services.


Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To provide a simple demonstration of the workflow, we have a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand to speak over physical COM, so we ran the application inside of a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, an application was created to generate random integer values and send them over a named pipe to the VM’s virtual COM port (the PowerShell console on the left).

As we see in the video at the beginning, if we do not assign COM ports to our container, when the application runs in the container and tries to open a handle to the COM port, it fails with an IOException (because as far as the container knew, the COM port didn’t exist!). On our second run of the container, we assign the COM port to the container and the application successfully gets and prints out the incoming random ints generated by our app running on the host.

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we’re passing to the device argument is simple: it looks for an IdType and an Id. For this coming release of Windows, we only support an IdType of “class”. For Id, this is a device interface class GUID. The values are delimited by a slash, “/”. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for a user to specify an interface class, and all devices which identify as implementing this class will be plumbed into the container.

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a distinct list of devices: In this release, we targeted enabling a specific set of features and a specific set of host device classes. We’re starting with simple buses. The complete list that we currently support is below.

Device Type | Interface Class GUID
GPIO | 916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus | A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port | 86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus | DCDE6AF9-6610-4285-828F-CAAF78C424CC

Stay tuned for a Part 2 of this blog that explores the architectural decisions we chose to make in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.

Cheers,

Craig Wilhite (@CraigWilhite)

Manage Hyper-V containers and VMs with these best practices

Containers and VMs should be treated as the separate instance types they are, but certain management strategies work for both, and admins should incorporate them.


Containers and VMs are best suited to different workload types, so it makes sense that IT administrators would use both in their virtual environments, but that adds another layer of complexity to consider.

One of the most notable features introduced in Windows Server 2016 was support for containers. At the time, it seemed that the world was rapidly transitioning away from VMs in favor of containers, so Microsoft had little choice but to add container support to its flagship OS.

Today, organizations use both containers and VMs. But for admins that use a mixture, what’s the best way to manage Hyper-V containers and VMs?

To understand the management challenges of supporting both containers and VMs, admins need to understand a bit about how Windows Server 2016 works. From a VM standpoint, Windows Server 2016 Hyper-V isn’t that different from the version of Hyper-V included with Windows Server 2012 R2. Microsoft introduced a few new features, as with every new release, but the tools and techniques used to create and manage VMs were largely unchanged.

In addition to being able to host VMs, Windows Server 2016 includes native support for two different types of containers: Windows Server containers and Hyper-V containers. Windows Server containers and the container host share the same kernel. Hyper-V containers differ from Windows Server containers in that Hyper-V containers run inside a special-purpose VM. This enables kernel-level isolation between containers and the container host.
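
On a Windows container host running Docker, that difference surfaces as an isolation mode on the run command; a quick sketch, with the base image tag as an example:

# Windows Server container: shares the host kernel (process isolation)
docker run --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
# Hyper-V container: the same image wrapped in a lightweight utility VM for kernel-level isolation
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver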

Hyper-V management

When Microsoft created Hyper-V containers, it faced something of a quandary with regard to the management interface.

The primary tool for managing Hyper-V VMs is Hyper-V Manager — although PowerShell and System Center Virtual Machine Manager (SCVMM) are also viable management tools. This has been the case ever since the days of Windows Server 2008. Conversely, admins in the open source world used containers long before they ever showed up in Windows, and the Docker command-line interface has become a standard for container management.

Ultimately, Microsoft chose to support Hyper-V Manager as a tool for managing Hyper-V hosts and Hyper-V VMs, but not containers. Likewise, Microsoft chose to support the use of Docker commands for container management.

Management best practices

Although Hyper-V containers and VMs both use the Hyper-V virtualization engine, admins should treat containers and VMs as two completely different types of resources. While it’s possible to manage Hyper-V containers and VMs through PowerShell, most Hyper-V admins seem to prefer using a GUI-based management tool for managing Hyper-V VMs. Native GUI tools, such as Hyper-V Manager and SCVMM, don’t support container management.
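
In practice, that means two side-by-side toolchains; a quick sketch from a single PowerShell session, assuming the Hyper-V module and the Docker CLI are both installed:

# List VMs through the Hyper-V PowerShell module
Get-VM
# List running containers through the Docker CLI
docker ps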

Admins who wish to manage their containers through a GUI should consider using one of the many interfaces that are available for Docker. Kitematic is probably the best-known of these interfaces, but there are third-party GUI interfaces for containers that arguably provide a better overall experience.

For example, Datadog offers a dashboard for monitoring Docker containers. Another particularly nice GUI interface for Docker containers is DockStation.

Those who prefer an open source platform should check out the Docker Monitoring Project. This monitoring platform is based on the Kubernetes dashboard, but it has been adapted to work directly with Docker.

As admins work to figure out the best way to manage Hyper-V containers and VMs, it’s important for them to remember that both depend on an underlying host. Although Microsoft doesn’t provide any native GUI tools for managing VMs and containers side by side, admins can use SCVMM to manage all manner of Hyper-V hosts, regardless of whether those servers are hosting Hyper-V VMs or Hyper-V containers.

Admins who have never worked with containers before should spend some time experimenting with them in a lab environment before attempting to deploy them in production. Although Hyper-V containers run on the Hyper-V engine, creating and managing containers is nothing like setting up and running Hyper-V VMs. A great way to get started is to install containers on Windows 10.


Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.
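
With an orchestrator such as Kubernetes, for example, that image swap becomes a one-line rolling update; the deployment and container names here are illustrative:

# Point the deployment's container (named "app" in this hypothetical manifest) at the rebuilt image
kubectl set image deployment/myapp app=myrepo/myapp:2.0
# Watch the rolling update replace old pods with patched ones
kubectl rollout status deployment/myapp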

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps-style workflows, and isolate and secure them from each other.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS provider as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.
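
Both of those operations are single CLI calls; the example below reuses the resource names created later in this walkthrough, with an illustrative version number:

# Grow or shrink the agent pool
az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3
# Move the cluster to a newer Kubernetes release
az aks upgrade --resource-group AKSCluster --name AKSCluster1 --kubernetes-version 1.14.8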

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. Like other cloud services, Microsoft only charges for the agent pool nodes that run.

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

[Screenshot: Starting a PowerShell session in the Azure Cloud Shell]

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService


How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and provide them to your operations team, who will import them into Kubernetes or clone them from GitHub and point the kubectl utility to the relevant manifest.
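
The handoff itself is a single command; the manifest filename below is a placeholder for whatever the developers provide:

# Deploy everything the manifest declares: deployments, services, configuration
kubectl apply -f ./myapp-manifest.yaml
# Confirm the pods and services came up
kubectl get pods,svc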

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.