Tag Archives: workloads

Windows troubleshooting tools to improve VM performance

Whether virtualized workloads stay on premises or move to the cloud, responsibility for supporting those VMs remains with the administrator.

When virtualized workloads don’t perform as expected, admins need to roll up their sleeves and break out the Windows troubleshooting tools. Windows has always had some level of built-in diagnostic ability, but it only goes so deep.

Admins need to stay on top of ways to analyze ailing VMs, but they also need to find ways to trim deployments to control resource use and costs for cloud workloads.

VM Fleet adds stress to your storage

VM Fleet tests the performance of your storage infrastructure by simulating virtual workloads. VM Fleet uses PowerShell to create a collection of VMs and run a stress test against the allocated storage.

This process verifies that your storage meets expectations before deploying VMs to production. VM Fleet doesn’t help troubleshoot issues, but it helps confirm the existing performance specifications before you ramp up your infrastructure. After the VMs are in place, you can use VM Fleet to perform controlled tests of storage auto-tiering and other technologies designed to adjust workloads during increased storage I/O.
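
VM Fleet’s own PowerShell scripts handle fleet creation and load generation, but the general pattern it automates can be sketched with the Hyper-V module and DiskSpd. In this rough example, the VM names, switch, VHDX path and DiskSpd parameters are placeholders rather than VM Fleet’s actual script interface:

    # Provision a small fleet of identical test VMs (illustrative names and paths)
    1..4 | ForEach-Object {
        New-VM -Name "stress-vm-$_" `
               -MemoryStartupBytes 2GB `
               -Generation 2 `
               -NewVHDPath "C:\ClusterStorage\Volume1\stress-vm-$_.vhdx" `
               -NewVHDSizeBytes 40GB `
               -SwitchName "vSwitch01"
        Start-VM -Name "stress-vm-$_"
    }

    # Inside each guest, VM Fleet drives I/O with DiskSpd; a comparable manual run:
    # 60-second test, 70% reads/30% writes, 8 threads, 4K random I/O, caching disabled
    .\diskspd.exe -c20G -d60 -w30 -t8 -o32 -b4K -r -Sh -L C:\stress\testfile.dat

The point is the pattern: spin up many identical guests, push synthetic I/O through all of them at once and watch how the underlying storage holds up.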

Sysinternals utilities offer deeper insights

Two Windows troubleshooting tools from the Microsoft Sysinternals collection, Process Explorer and Process Monitor, should be staples for any Windows admin.

Process Explorer gives you in-depth detail, including the dynamic link libraries and memory-mapped files a process has loaded. Process Explorer also lets you dig deep to uncover issues rather than throwing more resources at an application and, thus, masking the underlying problem.

Process Explorer gives administrators a technical deep dive into Windows processes that Task Manager can’t match.

Process Monitor captures real-time data on process activity, registry changes and file system changes on Windows systems. It also provides detailed information on process trees.

Administrators can use Process Monitor’s search and filtering functions to focus on particular events that occur over a longer period of time.
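
For longer captures, Process Monitor can also be started from PowerShell rather than the GUI so the trace runs unattended. This is a minimal sketch, assuming procmon.exe is already downloaded; the paths and the 10-minute window are placeholders:

    # Start a quiet, minimized capture that writes events to a backing file
    Start-Process -FilePath "C:\Sysinternals\Procmon.exe" `
                  -ArgumentList "/AcceptEula", "/Quiet", "/Minimized", "/BackingFile C:\Traces\slow-app.pml"

    # Reproduce the problem, or simply let the capture run for a while
    Start-Sleep -Seconds 600

    # Stop the capture; the .pml file can then be opened and filtered in the Process Monitor GUI
    Start-Process -FilePath "C:\Sysinternals\Procmon.exe" -ArgumentList "/Terminate" -Wait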

VMMap and RAMMap detail the memory landscape

Another Sysinternals tool, VMMap, shows what types of virtual memory are assigned to a process, along with its committed memory, which is the virtual memory reserved by the operating system. The tool presents where allocated memory is used in a visual layout.

VMMap shows how the operating system maps physical memory and uses memory in the virtual space, helping administrators analyze how applications work with memory resources.

VMMap doesn’t check the hypervisor layer, but it does detail virtual memory use provided by the OS. Combined with other tools that view the hypervisor, VMMap gives a complete picture of the applications’ memory usage.

Another tool, RAMMap, is similar to VMMap, but it works at the operating system level rather than the process level. Administrators can use both tools to get a complete picture of how applications obtain and use memory.

BgInfo puts pertinent information on display

BgInfo is a small Sysinternals utility that displays selected system information on the desktop, such as the machine name, IP address, patch version and storage information.

While it’s not difficult to find these settings, making them more visible helps when you log into multiple VMs in a short amount of time. It also helps you avoid installing software on, or even rebooting, the wrong VM.
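
BgInfo is usually applied at logon so the wallpaper stays current. A minimal sketch of a logon script, assuming BgInfo has been copied locally and a configuration has been saved to a .bgi file (both paths here are placeholders):

    # Refresh the desktop background with current system details at logon
    # /timer:0 applies the change immediately; /silent and /nolicprompt suppress dialogs
    & "C:\Tools\Bginfo.exe" "C:\Tools\vm-standard.bgi" /timer:0 /silent /nolicprompt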

Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, live in the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.

We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers.
Aaron Wright, systems engineer, Notre Dame

As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data. He could not find many good commercial options for protecting cloud data, but he did find N2WS Cloud Protection Manager through the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since the installation. One incident came after Linux machines failed to boot following a patch; engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.

Silver Peak SD-WAN adds service chaining, partners for cloud security

Silver Peak boosted its software-defined WAN security for cloud-based workloads with the introduction of three security partners.

Silver Peak Unity EdgeConnect customers can now add security capabilities from Forcepoint, McAfee and Symantec for layered security in their Silver Peak SD-WAN infrastructure, the vendor said in a statement. The three security newcomers join existing Silver Peak partners Check Point, Fortinet, OPAQ Networks, Palo Alto Networks and Zscaler.

Silver Peak SD-WAN allows customers to filter application traffic that travels to and from cloud-based workloads through security processes from third-party security partners. Customers can insert virtual network functions (VNFs) through service chaining wherever they need the capabilities, which can include traffic inspection and verification, distributed denial-of-service protection and next-generation firewalls.

These partnership additions build on Silver Peak’s recent update to incorporate a drag-and-drop interface for service chaining and enhanced segmentation capabilities. For example, Silver Peak said a typical process starts with customers defining templates for security policies that specify segments for users and applications. This segmentation can be created based on users, applications or WAN services — all within Silver Peak SD-WAN’s Unity Orchestrator.

Once the template is complete, Silver Peak SD-WAN launches and applies the security policies for those segments. These policies can include configurations for traffic steering, so specific traffic automatically travels through certain security VNFs, for example. Additionally, Silver Peak said customers can create failover procedures and policies for user access.

Enterprises are increasingly moving their workloads to public cloud and SaaS environments, such as Salesforce or Microsoft Office 365. Securing that traffic — especially traffic that travels directly over broadband internet connections — remains top of mind for IT teams, however. By service chaining security functions from third-party security companies, Silver Peak SD-WAN customers can access those applications more securely, the company said.

Silver Peak SD-WAN holds 12% of the $162 million SD-WAN market, according to a recent IHS Markit report, which ranks the vendor third after VMware-VeloCloud and Aryaka.

ONF pinpoints four technology areas to develop

The Open Networking Foundation unveiled four new supply chain partners that are working to develop technology reference designs based on ONF’s strategic plan. Along with the four partners — Adtran, Dell EMC, Edgecore Networks and Juniper Networks — ONF finalized the focus areas for the initial reference designs.

ONF’s reference designs provide blueprints to follow while building open source platforms that use multiple components, the foundation said in a statement. While the broad focus for these blueprints looks at edge cloud, ONF targeted four specific technology areas:

  • SDN-enabled broadband access. This reference design is based on a variant of the Residential Central Office Re-architected as a Datacenter project, which is designed to virtualize residential access networks. ONF’s project likewise supports virtualized access technologies.
  • Network functions virtualization fabric. This blueprint develops work on leaf-spine data center fabric for edge applications.
  • Unified programmable and automated network. ONF touts this as a next-generation SDN reference design that uses the P4 language for data plane programmability.
  • Open disaggregated transport network. This reference design focuses on open multivendor optical networks.

Adtran, Dell EMC, Edgecore and Juniper will each apply their own technology expertise to these reference design projects, ONF said. Additionally, as supply chain partners, they’ll aid operators in assembling deployment environments based on the reference designs.

Hybrid cloud security architecture requires rethinking

Cloud security isn’t for the squeamish. Protecting cloud-based workloads and designing a hybrid cloud security architecture has become a more difficult challenge than first envisioned, said Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass.

“The goal was simple,” he said. Enterprises wanted the same security they had for their internal workloads to be extended to the cloud.

But using existing security apps didn’t work out so well. In response, enterprises tried to concoct their own, but that meant the majority of companies had separate security foundations for their on-premises and cloud workloads, Oltsik said.

The answer to creating a robust hybrid cloud security architecture is central policy management, in which all workloads are tracked, policies and rules are applied, and networking components are displayed in a centralized console. Firewall and security vendors are beginning to roll out products supporting this strategy, Oltsik said, but it’s still incumbent upon CISOs to proceed carefully.

“The move to central network security policy management is a virtual certainty, but which vendors win or lose in this transition remains to be seen.”

Read the rest of what Oltsik had to say about centralized cloud security.

User experience management undergoing a shift

User experience management, or UEM, is a more complex concept than you may realize.

Dennis Drogseth, an analyst at Enterprise Management Associates in Boulder, Colo., described the metamorphosis of UEM, debunking the notion that the methodology is merely a subset of application performance management.

Instead, Drogseth said, UEM is multifaceted, encompassing application performance, business impact, change management, design, user productivity and service usage.

According to EMA research, over the last three years the two most important areas for UEM have been application performance and portfolio planning and optimization. UEM can provide valuable insights to assist both IT and the business.

One question surrounding UEM is whether it falls into the realm of IT or business. In years past, EMA data suggested 20% of networking staffers considered UEM a business concern, 21% saw it as an IT concern and 59% said it should be equally an IT and business concern. Drogseth agreed wholeheartedly with the latter group.

Drogseth expanded on the usefulness of UEM in his blog, including how UEM is important to DevOps and creating an integrated business strategy.

Mixed LPWAN results, but future could be bright

GlobalData analyst Kitty Weldon examined the evolving low-power WAN market in the wake of the 2018 annual conference in London.

Mobile operators built out their networks for LPWAN in 2017, Weldon said, and are now starting to look for action. Essentially every internet of things (IoT) service hopped on the LPWAN bandwagon; now they await the results.

So far, there have been 48 launches by 26 operators.

The expectation remains that lower costs and improved battery life will eventually usher in thousands of new low-bandwidth IoT devices connecting to LPWANs. However, Weldon noted that it’s still the beginning of the LPWAN era, and right now feelings are mixed.

“Clearly, there is some concern in the industry that the anticipated massive uptake of LPWANs will not be realized as easily as they had hoped, but the rollouts continue and optimism remains, tempered with realistic concerns about how best to monetize the investments.”

Read more of what Weldon had to say here.

Azure Backup service adds layer of data protection

With threats like ransomware on the rise, it’s more important than ever to have a solid backup strategy for company data and workloads. Microsoft’s Azure Backup service has matured into a product worth considering due to its centralized management and ease of use.

Whether it’s ransomware or other kinds of malware, the potential for data corruption is always lurking. That means that IT admins need a way to streamline backup procedures with the added protection and high availability made possible by the cloud.

Azure Backup consolidates protection for on-premises workloads, such as SharePoint, SQL Server, Exchange, file servers, client machines and VMs, along with cloud resources such as infrastructure-as-a-service VMs, into one recovery vault with solid data protection and restore capabilities. Administrators can monitor and start backup and recovery activities from a single Azure-based portal. After the initial setup, this arrangement lightens the burden on IT because off-site backups require minimal time and effort to maintain.

How Azure Backup works

The Azure Backup service stores data in what Microsoft calls a recovery vault, which is the central storage locker for the service whether the backup targets are in Azure or on premises.

The administrator needs to create the recovery vault before the Azure Backup service can be used. From the Azure console, select All services, type in Recovery Services and select Recovery Services vaults from the menu. Click Add, give it a name, associate it with an Azure subscription, choose a resource group and location, and click Create.
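
The same vault can be created with the Az PowerShell module instead of the portal. A minimal sketch, in which the subscription, resource group, vault name and region are placeholders:

    # Sign in and select the subscription that will own the vault
    Connect-AzAccount
    Set-AzContext -Subscription "Contoso-Production"

    # Create a resource group and the Recovery Services vault inside it
    New-AzResourceGroup -Name "rg-backup" -Location "EastUS"
    $vault = New-AzRecoveryServicesVault -Name "vault-backup01" `
                                         -ResourceGroupName "rg-backup" `
                                         -Location "EastUS"

    # Optionally switch the vault to locally redundant storage (geo-redundant is the default)
    Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant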

From there, to back up on-premises Windows Server machines, open the vault and click the Backup button. Azure will prompt for certain information: whether the workload is on premises or in the cloud and what to back up — files and folders, VMs, SQL Server, Exchange, SharePoint instances, system state information, and data to kick off a bare-metal recovery. When this is complete, click the Prepare Infrastructure link.

Configure backup for a Windows machine

The Microsoft Azure Recovery Services Agent (MARS) handles on-premises backups. Administrators download the MARS agent from the Prepare Infrastructure link, which also supplies the recovery vault credentials, and install it on the machines they want to protect. The agent then uses those vault credentials to link each on-premises machine to the Azure subscription and its recovery vault.
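
Once the agent is installed and registered with the vault, on-premises backup policies can also be scripted with the OB cmdlets the MARS agent installs. A rough sketch for a files-and-folders policy follows; the folder path, schedule and retention values are placeholders:

    # Build a backup policy for a local folder using the MARS agent's MSOnlineBackup cmdlets
    $policy = New-OBPolicy

    # Back up the folder every Saturday and Sunday at 4 p.m.
    $schedule = New-OBSchedule -DaysOfWeek Saturday, Sunday -TimesOfDay 16:00
    Set-OBSchedule -Policy $policy -Schedule $schedule

    # Keep recovery points for 30 days
    $retention = New-OBRetentionPolicy -RetentionDays 30
    Set-OBRetentionPolicy -Policy $policy -RetentionPolicy $retention

    # Add the folder to protect and activate the policy
    Add-OBFileSpec -Policy $policy -FileSpec (New-OBFileSpec -FileSpec "C:\Data")
    Set-OBPolicy -Policy $policy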

Azure Backup pricing

Microsoft determines Azure Backup pricing based on two components: the number of protected VMs or other instances, since Microsoft charges for each discrete item it backs up, and the amount of backup data stored within the service. The monthly pricing, illustrated in the worked example after this list, is:

  • for instances up to 50 GB, each instance is $5 per month, plus storage consumed;
  • for instances more than 50 GB, but under 500 GB, each instance is $10, plus storage consumed; and
  • for instances more than 500 GB, each instance is $10 per nearest 500 GB increment, plus storage consumed.
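
As a rough illustration of how the instance fees in the tiers above add up (storage consumption is billed separately and excluded here), consider three hypothetical instances of 40 GB, 300 GB and 1,000 GB:

    # Rough monthly instance-fee estimate based on the tiers above (storage charges excluded)
    function Get-BackupInstanceFee {
        param([double]$SizeGB)

        if ($SizeGB -le 50)      { return 5 }             # up to 50 GB: $5
        elseif ($SizeGB -lt 500) { return 10 }            # more than 50 GB, under 500 GB: $10
        else { return 10 * [math]::Round($SizeGB / 500) } # $10 per nearest 500 GB increment
    }

    # 40 GB -> $5, 300 GB -> $10, 1,000 GB -> $20, or roughly $35 per month plus storage
    (40, 300, 1000 | ForEach-Object { Get-BackupInstanceFee $_ } | Measure-Object -Sum).Sum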

Microsoft bases its storage prices on block blob storage rates, which vary based on the Azure region. While it’s less expensive to use locally redundant blobs than geo-redundant blobs, local blobs are less fault-tolerant. Restore operations are free; Azure does not charge for outbound traffic from Azure to the local network.

Pros and cons of the Azure Backup service

The service has several features that are beneficial to the enterprise:

  • There is support to back up on-premises VMware VMs. Even though Azure is a Microsoft cloud service, the Azure Backup product will take VMware VMs as they are and back them up. It’s possible to install the agent inside the VM on the Windows Server workload, but it’s neater and cleaner to just back up the VM.
  • Administrators manage all backups from one console regardless of the target location. Microsoft continually refines the management features in the portal, which is very simple to use.
  • Azure manages storage needs and automatically adjusts as required. This avoids the challenges and capacity limits associated with on-premises backup tapes and hard drives.

The Azure Backup service isn’t perfect, however.

  • It requires some effort to understand pricing. Organizations must factor in what it protects and how much storage those instances will consume.
  • The Azure Backup service supports Linux, but it requires the use of a customized copy of System Center Data Protection Manager (DPM), which is more laborious compared to the simplicity and ease of MARS.
  • Backing up Exchange, SharePoint and SQL workloads requires the DPM version that supports those products. Microsoft includes it with the service costs, so there’s no separate licensing fee, but it still requires more work to deploy and understand.

The Azure Backup service is one of the more compelling administrative offerings from Microsoft. I would not recommend it as a company’s sole backup product — local backups are still very important, and even more so if time to restore is a crucial metric for the enterprise — but Azure Backup is a worthy addition to a layered backup strategy.

Azure migration takes hostile approach to lure VMware apps

The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.

A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.

Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.

This approach eschews VMware virtualization and adapts these applications to a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It’s the other part of the Azure migration service that has drawn the ire of VMware.

VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.

In response, VMware criticized Microsoft’s characterization of the Azure migration service as part of a transition to the public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won’t recommend or support the product.

This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.

Part of the motivation here could be Microsoft posturing, either to negotiate a future deal with VMware or to ensure it doesn’t lose out on these types of migrations, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.

VMware on AWS, which became generally available in late August, is operated by VMware, and through the company’s Cloud Foundation program ports its software-defined data centers to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered to be the second and third most popular public clouds behind AWS.

Companies have a mix of applications: some are well-suited to transition to the cloud, while others must stay inside a private data center or can’t be re-architected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware’s recent partnerships have made companies feel more comfortable with the public cloud and with curbing the management of their own data centers.

“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It’s purely promise that they’re not locked into running VMware in their own data center that has caused them to double down on VMware.”

The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS.
Jeff Kato, analyst, Taneja Group

VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.

“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.

Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.

“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.

Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is only a public cloud provider, so it doesn’t have the same level of competitive friction as there has been historically between Microsoft and VMware, Kato said.

“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counter attack to VMware on AWS,” he said.

Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.

“If the customer is big enough, they’ll force them to work together,” Kato said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

IT pros get comfortable with Kubernetes in production

IT pros who’ve run production workloads with Kubernetes for at least a year say it can open up frontiers for IT operations within their organizations.

It’s easier to find instances of Kubernetes in production in the enterprise today versus just a year ago. This is due to the proliferation of commercial platforms that package this open source container orchestration software for enterprise use, such as CoreOS Tectonic and Rancher Labs’ container management product, Rancher. In the two years since the initial release of Kubernetes, early adopters said the platform has facilitated big changes in high availability (HA) and application portability within their organizations.

For example, disaster recovery (DR) across availability zones (AZs) in the Amazon Web Services (AWS) public cloud was notoriously unwieldy with VM-based approaches. Yet, it has become the standard for Kubernetes deployments at SAP’s Concur Technologies during the last 18 months.

Concur first rolled out the open source, upstream Kubernetes project in production to support a receipt image service in December 2015, at a time when clusters that spanned multiple AZs for HA were largely unheard-of, said Dale Ragan, principal software engineer for the firm, based in Bellevue, Wash.

“We wanted to prepare for HA, running it across AZs, rather than one cluster per AZ, which is how other people do it,” Ragan said. “It’s been pretty successful — we hardly ever have any issues with it.”

Ragan’s team seeks 99.999% uptime for the receipt image service, and it’s on the verge of meeting this goal now with Kubernetes in production, Ragan said.

Kubernetes in production offers multicloud multi-tenancy

Kubernetes has spread to other teams within Concur, though those teams run multi-tenant clusters based on CoreOS’s Tectonic, while Ragan’s team sticks to a single-tenant cluster still tied to upstream Kubernetes. The goal is to move that first cluster to CoreOS, as well, though the company must still work out licensing and testing to make sure the receipt imaging app works well on Tectonic, Ragan said. CoreOS has prepared for this transition with recent support for the Terraform infrastructure-as-code tool, with which Ragan’s team underpins its Kubernetes cluster.

CoreOS just released a version of Tectonic that supports automated cluster creation and HA failover across AWS and Microsoft Azure clouds, which is where Concur will take its workloads next, Ragan said.

“Using other cloud providers is a big goal of ours, whether it’s for disaster recovery or just to run a different cluster on another cloud for HA,” Ragan said. With this in mind, Concur has created its own tool, called Scipian, to monitor resources across multiple infrastructures; it will soon release the tool to the open source community.

Ragan said the biggest change in the company’s approach to Kubernetes in production has been the move to multi-tenancy in newer Tectonic clusters and the division of shared infrastructure into consumable pieces with role-based access. Network administrators can now provision a network that developers who roll out Kubernetes clusters can consume, without being granted administrative access, for example.

In the next two years, Ragan said he expects to bring the company’s databases into the Kubernetes fold to also gain container-based HA and DR across clouds. For this to happen, the Kubernetes 1.7 additions to StatefulSets and secrets management must emerge from alpha and beta versions as soon as possible; Ragan said he hopes to roll out those features before the end of this year.

Kubernetes in production favors services-oriented approach

Dallas-based consulting firm etc.io uses HA across cloud data centers and service providers for the clients it helps deploy containers. During the most recent Amazon outage, etc.io clients failed over between AWS and the public cloud providers OVH and Linode through Rancher’s orchestration of Kubernetes clusters, said E.T. Cook, chief advocate for the firm.

“With Rancher, you can orchestrate domains across multiple data centers or providers,” Cook said. “It just treats them all as one giant intranetwork.”

In the next two years, Cook said he expects Rancher will make not just cloud infrastructures, but container orchestration platforms such as Docker Swarm and Kubernetes interchangeable with little effort. He said he evaluates these two platforms frequently because they change so fast. Cook said it’s too soon to pick a winner in the container orchestration market yet, despite the momentum behind Kubernetes in production at enterprises.

Docker’s most recent Enterprise Edition release favors enterprise approaches to software architectures that are stateful and based on permanent stacks of resources. This is in opposition to Kubernetes, which Cook said he sees as geared toward ephemeral stateless workloads, regardless of its recent additions to StatefulSets and access control features.

It’s like the early days of HD DVD vs. Blu-ray … long term, there may be another major momentum shift.
E.T. Cook, chief advocate, etc.io

“Much of the time, there’s no functional difference between Docker Swarm and Kubernetes, but they have fundamentally different ways of getting to that result,” Cook said.

The philosophy behind Kubernetes favors API-based service architecture, where interactions between services are often payloads, and “minions” scale up as loads and queues increase, Cook said. In Docker, by contrast, the user sets up a load balancer, which then forwards requests to scaled services.

“The services themselves are first-class citizens, and the load balancers expose to the services — whereas in the Kubernetes philosophy, the service or endpoint itself is the first-class citizen,” Cook said. “Requests are managed by the service themselves in Kubernetes, whereas in Docker, scaling and routing is done using load balancers to replicated instances of that service.”

The two platforms now compete for enterprise hearts and minds, but before too long, Cook said he thinks it might make sense for organizations to use each for different tasks — perhaps Docker to serve the web front-end and Kubernetes powering the back-end processing.

Ultimately, Cook said he expects Kubernetes to find a long-term niche backing serverless deployments for cloud providers and midsize organizations, while Docker finds its home within the largest enterprises that have the critical mass to focus on scaled services. For now, though, he’s hedging his bets.

“It’s like the early days of HD DVD vs. Blu-ray,” Cook said. “Long term, there may be another major momentum shift — even though, right now, the market’s behind Kubernetes.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.