Free Kubernetes security tools broaden enterprise choices

Kubernetes security tools have proliferated in 2018, and their growing numbers reflect increased maturity around container security among enterprise IT shops.

The latest additions to this tool category include a Google Kubernetes Engine feature called Binary Authorization, which creates whitelists of the container images and code authorized to run on GKE clusters. Attempts to launch unauthorized apps will fail, and the feature will log them.

Binary Authorization is in public beta. Google will also make the feature available for on-premises deployments through updates to Kritis, an open source project focused on deployment-time policy enforcement.
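For admins who want to experiment, the feature is enabled at cluster creation and governed by a policy you import. The commands below are a minimal sketch based on the gcloud CLI during the beta; the cluster name and policy file are placeholders:

gcloud beta container clusters create demo-cluster --enable-binary-authorization
gcloud beta container binauthz policy import policy.yaml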

Aqua Security also added to the arsenal of Kubernetes security tools at IT pros’ disposal with kube-hunter, an open source utility for penetration testing of Kubernetes clusters. The tool performs passive scans of Kubernetes clusters to look for common vulnerabilities, such as dashboard and management server ports left open. These seemingly obvious errors have compromised high-profile companies, such as Tesla, Aviva and Gemalto.

Users can also perform active penetration tests with kube-hunter. In this scenario, the tool attempts to exploit the vulnerabilities it finds as if an attacker has gained access to Kubernetes cluster servers, which may highlight additional vulnerabilities in the environment.
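As a rough sketch of how a run looks (the host and subnet here are placeholders), a passive scan points kube-hunter at specific nodes or a network range, while the --active flag adds the exploitation phase described above:

./kube-hunter.py --remote node1.example.com
./kube-hunter.py --cidr 192.168.0.0/24 --active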

These tools join several other Kubernetes security offerings introduced in 2018 — from Docker Enterprise Edition’s encryption and secure container registry features for the container orchestration platform to Kubernetes support in tools from Qualys and Alert Logic. The growth of Kubernetes security tools indicates the container security conversation has shifted away from ways to secure individual container images and hosts to security at the level of the application and Kubernetes cluster.

“Containers are not foolproof, but container security is good enough for most users at this point,” said Fernando Montenegro, analyst with 451 Research. “The interest in the industry shifts now to how to do security at the orchestration layer and secure broader container deployments.”

GKE throws down the gauntlet for third-party container orchestration tools

Google’s Binary Authorization feature isn’t unique; other on-premises and hybrid cloud Kubernetes tools, such as Docker Enterprise Edition, Mesosphere DC/OS and Red Hat OpenShift, offer similar capabilities to prevent unauthorized container launches on Kubernetes clusters.

However, third-party vendors once again find themselves challenged by a free and open source alternative from Google. Just as Kubernetes supplanted other container orchestration utilities, these additional Kubernetes management features further reduce third-party tools’ competitiveness.

GKE Binary Authorization is one of the first instances of a major cloud provider adding such a feature natively in its Kubernetes service, Montenegro said.

“[A gatekeeper for Kubernetes] is not something nobody’s thought of before, but I haven’t seen much done by other cloud providers on this front yet,” Montenegro said. AWS and Microsoft Azure will almost certainly follow suit.

“The question for users, as cloud providers add these features, is, why go for a third-party tool when the cloud provider does this kind of thing themselves?” Montenegro said.

Aqua Security’s penetration testing tool is unlikely to unseat full-fledged penetration testing tools enterprises use, such as Nmap and Burp Suite, but its focus on Kubernetes vulnerabilities specifically with a free offering will attract some users, Montenegro said.

Aqua Security and its main competitor, Twistlock, also must stay ahead of Kubernetes security features as they’re incorporated into broader enterprise platforms from Google, Cisco and others, Montenegro said.

AI bias and data stewardship are the next ethical concerns for infosec

When it comes to artificial intelligence and machine learning, there is a growing understanding that rather than constantly striving for more data, data scientists should be striving for better data when creating AI models.

Laura Norén, director of research at Obsidian Security, spoke about data science ethics at Black Hat USA 2018, and discussed the potential pitfalls of not having quality data, including AI bias learned from the people training the model.

Norén also looked ahead to data science ethics questions that have yet to be asked, such as what should happen to a person’s data after they die.

Editor’s note: This is part two of our talk with Norén and it has been edited for length and clarity.

What do you think about how companies go about AI and machine learning right now?
 
Laura Norén: I think some of them are getting smarter. At a very large scale, it’s not noise, but you get a lot of data that you don’t really need to store forever. And frankly it costs money to store data. It costs money to have lots and lots and lots of variable features in your model. If you get a more robust model and you’re aware of where your signal is coming from, you may also decide not to store particular kinds of data because it’s actually inefficient at some point.
 
For instance, astronomers have this problem. They’ve been building telescopes that are generating so much data, it cripples the system. They’ve had seven years of planning just to figure out which data to keep, because they can’t keep it all.

There’s a myth out there that in order to develop really great machine learning systems you need to have everything, especially at the outset, when you don’t really know what the predictive features are going to be. It’s nontrivial to do the math and to use the existing data and tests and simulations to figure out what you really need to store and what you don’t need to capture in the first place. It’s part of the hoarding mythology that somehow we need all of the data all of the time for all time for every person.

How does data science ethics relate to issues of AI bias caused by the data that’s fed in?
 
Norén: That is such a great, great question. I absolutely know that it’s going to be important. We’re aware of that, we’re watching for it, we’re monitoring for it so we can test for bias in this case against Russians. Because it’s cybersecurity, that’s a bias we might have. You can test for that kind of thing. And so we’re building tests for those kinds of predictable biases we might have.

I wish I had a great story of how we discovered that we’re biased against Russians or North Koreans or something like that. But I don’t have that yet because it would just be wrong to kind of run into some of the great stories that I’m sure we’re going to run into soon enough.
 
How do you identify what could be an AI bias that you need to worry about when first building the system?

 
Norén: When you have low data or your models are kind of all over the place because it’s the very beginning, you might be able to use social science to help you look for early biases. All of the data that we’re feeding into these systems are generated by humans and humans are inherently biased, that’s how we’ve evolved. That turns out to be really strong, evolutionarily speaking, and then not so great in advanced evolution.
 
You can test for things that you think might have a known bias, which then it helps to know your history. Like I said, in cybersecurity you might worry about being biased specifically against particular regions. So you may have a higher false-positive rate for Russians or for Russian language content or Chinese language content, or something like that. You could specifically test for those because you went in knowing that you might have a bias. It’s a little bit more technical and difficult to unearth biases that you were not expecting. We’re using technical solutions and data social science to try to help surface those.

I think social science has been kind of the sleeper hit in data science. It turns out it really helps if you know your domain really well. In our case, that’s social science because we’re dealing with humans. In other cases, it might help to be a really good biologist if you’re starting to do genomics at a predictive level. In general, the strongest data scientists we see are people who have both very high technical skills in the data science vertical but also deep knowledge of their domain.
 
It sounds like a lot of the potential mitigations for AI bias and data science issues boil down to being more proactive rather than reactive. In that spirit, what is an issue that you think will become a bigger topic of discussion in the next five years?
 
Norén: I do actually think it’s going to be very interesting just how people feel about what happens to their data as more and more companies have more and more data about people forever and their data are going to outlive them. There have been some people who are already working on that kind of thing.
 
Say you have a best friend and your best friend dies, but you have all these emails and chats, texts, back-and-forth with your best friend. Someone is developing a chatbot that mimics your best friend by being trained on all those actual conversations you had and will then live on past your best friend. So you can continue to talk with your best friend even though your best friend is dead. That’s an interesting, kind of provocative, almost artistic take on that point.
 
But I think it’s going to be a much bigger topic of conversation to try to understand what it means to have yourself, profiles and data live out beyond the end of your own life and be able to extend to places that you’re not actually in. It will drive decisions about you that you will have no agency over. The dead best friend has no agency over that chatbot.

Indefinite data storage will become much, much more topical in conversation and we’ll also start to see then why the right to be forgotten is an insufficient response to that kind of thing because it assumes that you know where to go as your agency, or that you even have agency at all. You’re dead; you obviously don’t have any agency. Maybe you should, maybe you shouldn’t. That’s an interesting ethical question.

Users are already finding they don’t always have agency over their data even when alive, aren’t they?
 
Norén: Even if you’re alive, if you don’t really know who holds your data, you may have no agency to get rid of it. I can’t call up Equifax and tell them to delete my data. I’m an American, but I don’t have that. I know they’re stewards of it but there’s nothing I could do about that.

We’ll probably favor conversation a lot more in terms of being good guardians of data rather than talking about it in terms of something that we own or don’t own; it will be about stewardship and guardianship. That’s a language that I’m borrowing from medical ethics because they’re using that type of language to deal with DNA.
 
Can someone else own your DNA? They’ve decided no. DNA is such an intrinsic part of a person’s identity and a person’s physicality that it can’t be owned in whole by someone else. But that someone else, like a hospital or a research lab, could take guardianship of it.

The language is out there, but we haven’t really seen it move all the way through the field of data science. It’s kind of stuck over in genomics and the Henrietta Lacks story. She was a woman who had cervical cancer, and she died. But her cells, her cancer cells, were really robust. They worked really well in research settings and they lived on well past Henrietta’s life. Her family was unaware of this. There’s this beautiful book written about what it means to find out that part of your family — this deceased family member that you cared about a lot — is still alive and is still fueling all this research when you didn’t even know anything about it. That’s kind of where that conversation got started, but I see a lot of parallels there between data science and what people think of when they think of DNA.
 
One of the things that’s so different about data science is that we now can actually have a much more complete record of an individual than we have ever been able to have. It’s not just a different iteration on the same kind of thing. You used to be able to have some sort of dossier on you that has your birthdate and your Social Security number, your name and whether you were married. That’s such a small amount of information compared to every single interaction that you’ve had with a piece of software, with another person, with a communication, every medical record, everything that we might know about your DNA. And our knowledge will continue to get deeper and deeper and deeper as science progresses. And we don’t really know what that’s going to do to the concept of individuality and finiteness.
 
I think about these things very deeply. We’re going to see that in terms of, ‘Wow, what does it mean that your data is so complete and it exists in places and times that you could never exist and will never exist?’ That’s why I think that decay by design thing is so important.

Report: ERP security is weak, vulnerable and under attack

ERP systems are seeing growing levels of attack for two reasons. First, many of these systems — especially in the U.S. — are now connected to the internet. Second, ERP security is hard. These systems are so complex and customized that patching is expensive, complicated and often put off. 

Windows systems are often patched within days, but users may wait years to patch some ERP systems. Old versions of PeopleSoft and other ERP applications, for instance, remain unpatched and connected to the internet, according to researchers at two cybersecurity firms that jointly examined the risks to ERP security.

These large corporate systems, which manage global supply chains and manufacturing operations, could be compromised and shut down by an attacker, said Juan Pablo Perez-Etchegoyen, CTO of Onapsis, a cybersecurity firm based in Boston.

“If someone manages to breach one of those [ERP] applications, they could literally stop operations for some of those big players,” Perez-Etchegoyen said in an interview. His firm, along with Digital Shadows, released a report, “ERP Applications Under Fire: How Cyberattackers Target the Crown Jewels,” which was recently cited as a must-read by the U.S. Computer Emergency Readiness Team within the Department of Homeland Security. This report looked specifically at Oracle and SAP ERP systems.

Warnings of security vulnerabilities are not new

Cybersecurity researchers have long warned that U.S. critical infrastructure is vulnerable. Much of the focus has been on power plants and other utilities, but ERP systems also manage critical infrastructure, and the report by Onapsis and Digital Shadows is seen as backing up a broader worry about infrastructure risks.

“The great risk in ERP is disruption,” said Alan Paller, the founder of SANS Institute, a cybersecurity research and education organization in Bethesda, Md.

If the attackers were just interested in extortion or gaining customer data, there are easier targets, such as hospitals and e-commerce sites, Paller said. What the attackers may be doing with ERP systems is prepositioning, which can mean planting malware in a system for later use.

In other words, attackers “are not sure what they are going to do” once they get inside an ERP system, Paller said. But they would rather get inside the system now, and then try to gain access later, he said.

The report by Onapsis and Digital Shadows found increased hacker interest in ERP-specific vulnerabilities. This interest has been tracked across a variety of sources, including the dark web, the part of the internet accessible only through special networks.

Complexity makes ERP security difficult

The problem facing ERP security, Perez-Etchegoyen said, is “the complexity of ERP applications makes it really hard and really costly to apply patches. That’s why some organizations are lagging behind.”

SAP and Oracle, in emailed responses to the report, both said something similar: Customers need to stay up-to-date on patches.

“Our recommendation to all of our customers is to implement SAP security patches as soon as they are available — typically on the second Tuesday of every month — to protect SAP infrastructure from attacks,” SAP said.

Oracle pointed out that it “issued security updates for the vulnerabilities listed in this report in July and in October of last year. The Critical Patch Update is the primary mechanism for the release of all security bug fixes for Oracle products. Oracle continues to investigate means to make applying security patches as easy as possible for customers.”

One of the problems is knowing the intent of the attackers, and the report cited a full range of motives, from cyberespionage to sabotage, pursued by a variety of groups, from hacktivists to foreign countries.

Next wave of attacks could be destructive

But one fear is that the next wave of major attacks will attempt to destroy or cause real damage to systems and operations.

This concern was something Edward Amoroso, retired senior vice president and CSO of AT&T, warned about.

In a widely cited open letter in November 2017 to then-President-elect Donald Trump, Amoroso said attacks “will shift from the theft of intellectual property to destructive attacks aimed at disrupting our ability to live as free American citizens.” The ERP security report’s findings were consistent with his earlier warning, he said in an email.

Foreign countries know that “companies like SAP, Oracle and the like are natural targets to get info on American business,” Amoroso said. “All ERP companies understand this risk, of course, and tend to have good IT security departments. But going up against military actors is tough.”

Amoroso’s point about the risk of a destructive attack was specifically cited and backed by a subsequent MIT report, “Keeping America Safe: Toward More Secure Networks for Critical Sectors.”  The MIT report warned that attackers enjoy “inherent advantages owing to human fallibility, architectural flaws in the internet and the devices connected to it.”

Microsoft bills Azure network as the hub for remote offices

Microsoft’s foray into the rapidly growing SD-WAN market could solve a major customer hurdle and open Azure to even more workloads.

All the major public cloud platforms have increased their networking functionality in recent months, and Microsoft’s latest service, Azure Virtual WAN, pushes the boundaries of those capabilities. The software-defined network acts as a hub that links with third-party tools to improve application performance and reduce latency for companies with multiple offices that access Azure.

IDC estimates the software-defined wide area network (SD-WAN) market will hit $8 billion by 2021, as cloud computing continues to proliferate and employees must access cloud-hosted workloads from various locations. So far, the major cloud providers have left that work to partners.

But this Azure network service solves a big problem for customers that make decisions about network transports and integration with existing routers, as they consume more cloud resources from more locations, said Brad Casemore, an IDC analyst.

“Now what you’ve got is more policy-based, tighter integration within the SD-WAN,” he said.

Azure Virtual WAN uses a distributed model to link Microsoft’s global network with traditional on-premises routers and SD-WAN systems provided by Citrix and Riverbed. Microsoft’s decision to rely on partners, rather than provide its own gateway services inside customers’ offices, suggests it doesn’t plan to compete across the totality of the SD-WAN market, but rather provide an on-ramp to integrate with third-party products.

Customers can already use various SD-WAN providers to easily link to a public cloud, but Microsoft has taken the level of integration a step further, said Bob Laliberte, an analyst at Enterprise Strategy Group in Milford, Mass. Most SD-WAN vendors are building out security ecosystems, but Microsoft already has that in Azure, for example.

This could also simplify the purchasing process, and it would make sense for Microsoft to eventually integrate this virtual WAN with Azure Stack to help facilitate hybrid deployments, Laliberte said.

The Azure Virtual WAN service is billed as a way to connect remote offices to the cloud, and also to each other, with improved reliability and availability of applications. But that interoffice linkage also could lure more companies to use Azure for a whole host of other services, particularly customers just starting to embrace the public cloud.

There are still questions about the Azure network service, particularly around multi-cloud deployments. It’s unclear if customers trust Microsoft — or any single hyperscale cloud vendor — at the core of their SD-WAN implementation, as their architectures spread across multiple clouds, Casemore said.

Azure updates boost network security, data analytics tools

Microsoft also introduced an Azure network security feature this week, Azure Firewall, with which users can create and enforce network policies across multiple endpoints. The stateful firewall protects Azure Virtual Network resources and maintains high availability without restrictions on scale.
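A minimal sketch using the Azure CLI, assuming the azure-firewall extension that packages these commands; the firewall and resource group names are placeholders:

az extension add --name azure-firewall
az network firewall create --name demo-fw --resource-group demo-rg --location centralus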

Several other updates include an expanded Azure Data Box service, still in preview, which provides customers with an appliance onto which they can upload data and ship it directly to an Azure data center. These types of devices have become a popular means to speed massive migrations to public clouds. Another option for Azure users, Azure Data Box Disk, uses SSDs to transfer up to 40 TB of data spread across five drives. That’s smaller than the original box’s 100 TB capacity and better suited to collecting data from multiple branches or offices, the company said.

Microsoft also doubled the query performance of Azure SQL Data Warehouse to support up to 128 concurrent queries, and waived the transfer fee for migrations to Azure of legacy applications that run on Windows Server and SQL Server 2008/2008 R2, for which Microsoft will end support in July 2019. Microsoft also plans to add ingestion and integration features across BI models to Power BI, similar to what customers experience with Power Query for Excel.

Reflect adds color to Puppet DevOps tools

Data visualization specialist Reflect enlivens the growing Puppet DevOps tool portfolio, but it’s unclear if Puppet’s wares will catch enterprise customers’ attention in a busy marketplace.

The purchase of Reflect, a startup company based in Portland, Ore., shows that Puppet has little choice but to reinvent itself as containers pull users’ attention away from traditional configuration management, analysts said. Data visualization, a way to portray data so that it’s easily understood by people, will also be increasingly important as microservices architectures expand and IT management complexity skyrockets.

“The ability to paint pretty pictures [of data] is not just a ‘nice to have’ feature,” said Charles Betz, analyst at Forrester Research. “It’s important as microservices become more difficult to visualize and manage.”

Puppet didn’t specify its plans to integrate Reflect’s software with Puppet Enterprise, Puppet Discovery and its continuous delivery tools, but competitors in DevOps pipeline tools, such as Electric Cloud and XebiaLabs, recently added monitoring and visualization features to illustrate the health of pipelines. It’s a safe bet Puppet DevOps tools must also move in that direction, Betz said.

“Puppet has non-trivial data stores already, a lot of it systems configuration data that’s very close to the metal in Puppet Enterprise’s core data repository,” he said.

Puppet lacks a data warehouse or data analytics offering to feed into Reflect’s visual tools, but company CEO Sanjay Mirchandani declined to say whether another acquisition or internal IP will fill in that layer of the architecture.

Containers, infrastructure as code invade configuration management’s turf

Enterprise IT shops are overwhelmed by a wall of marketing noise from vendors that want to be their one-stop shop for DevOps. But one vendor or one tool won’t necessarily solve technical problems in infrastructure automation, said Ernest Mueller, director of engineering operations at AlienVault, an IT security firm based in San Mateo, Calif., which plans to reduce its use of Puppet’s configuration management tools.

“As we move to Docker and immutable infrastructure deployments, our goal is to cut the lines of Puppet code we use in half,” Mueller said. “We’re trying to shift configuration management left — adding it at the end just creates problems, because if you try to do the same configuration operation on a thousand different servers, it’s bound to fail on one of them.”

Mueller monitors upgraded capabilities from vendors such as Chef and Puppet, and is interested in a CI/CD process for infrastructure as code. Puppet’s reusable manifests appeal to Mueller more than Chef’s community-maintained cookbooks, but Chef InSpec’s continuous integration-style security and compliance testing intrigues him for infrastructure code.

Overall, though, infrastructure as code testing and deployment still needs a lot of development, and tools are still emerging to help, Mueller said.

“You can’t just apply an application CI/CD tool to infrastructure code,” he said. “In our application unit tests, for example, the best practice is never to call a public API, but what if the code is creating an Amazon Machine Image? The nature of infrastructure as code means there’s no one answer for CI/CD today, and figuring out how to stitch together multiple tools takes a lot of work, without a good reference architecture.”

Presumably, Puppet will expand its CI/CD tools’ integrations and coverage beyond Puppet Enterprise code, but right now Continuous Delivery for Puppet Enterprise doesn’t cover other infrastructure-as-code tools, such as HashiCorp’s Terraform, which Mueller’s shop also uses.

A former Puppet user who switched to Red Hat’s Ansible infrastructure automation tool said that, despite Puppet’s acquisitions, he likely won’t re-evaluate its CI/CD tools.

“We’re more interested in things like Netflix’s Spinnaker, which plugs in well to Kubernetes [for container orchestration],” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. Spinnaker is a multi-cloud continuous delivery platform open sourced by the same company that made Chaos Monkey.

“Distelli is good for heavy Puppet users, but I wish it had been around earlier. Now there’s just a proliferation of tools to consider.”

Puppet and Chef face game of DevOps musical chairs

As containers and container orchestration tools begin to replace the need for server-level automation in enterprise data centers, configuration management tool vendors such as Puppet and Chef have refocused on higher-order IT infrastructure and application automation. Chef has attacked the space with its homegrown Chef Automate, Chef Habitat and Chef InSpec tools, which add application-focused IT automation to complement the company’s configuration management products. Puppet has expanded its product portfolio through acquisition under Mirchandani, who took over as CEO in 2016. Puppet bought CI/CD and container orchestration vendor Distelli in 2017 and rereleased some of Distelli’s software as Continuous Delivery for Puppet Enterprise, which performs continuous integration testing and continuous deployment tasks for Puppet’s infrastructure as code, in early 2018.

“Puppet hasn’t had much choice but to develop a strategy that moves into some adjacencies — otherwise Kubernetes is an existential threat,” Betz said.

In addition to Chef, Electric Cloud and XebiaLabs, a Puppet DevOps bid must fend off a horde of competitors from Red Hat to Docker to AWS and Microsoft Azure, and all seek revenues in a relatively small market, Betz said. Forrester estimates the total DevOps tools market size at $1 billion, compared to $2 to $3 billion for application performance monitoring, another relatively niche space. Both those markets are dwarfed by the market for IT service management tools, which Forrester estimates to be an order of magnitude bigger.

“It’s a game of musical chairs, and many of those chairs will be suddenly pulled out, especially if the economy even hiccups,” Betz said. “There’s no question this market will further consolidate.”

Kubernetes in Azure eases container deployment duties


With the growing popularity of containers in the enterprise, administrators require assistance to deploy and manage these workloads, particularly in the cloud.

When you consider the growing prevalence of Linux and containers both in Windows Server and in the Azure platform, it makes sense for administrators to get more familiar with how to work with Kubernetes in Azure.

Containers help developers streamline the coding process, while orchestrators give the IT staff a tool to deploy these applications in a cluster. One of the more popular tools, Kubernetes, automates the process of configuring container applications within and on top of Linux across public, private and hybrid clouds.

For companies that prefer to use Azure for container deployments, Microsoft developed the Azure Kubernetes Service (AKS), a hosted control plane, to give administrators an orchestration and cluster management tool for its cloud platform.

Why containers and why Kubernetes?

There are many advantages to containers. Because they share an operating system, containers are lighter than virtual machines (VMs). Patching containers is less onerous than it is for VMs; the administrator just swaps out the base image.

On the development side, containers are more convenient. Containers are not reliant on underlying infrastructure and file systems, so they can move from operating system to operating system without issue.

Kubernetes makes working with containers easier. Most organizations choose containers because they want to virtualize applications and produce them quickly, integrate them with continuous delivery and DevOps-style workflows, and isolate and secure them from each other.

For many people, Kubernetes represents a container platform where they can run apps, but it can do more than that. Kubernetes is a management environment that handles compute, networking and storage for containers.

Kubernetes acts as much like a PaaS provider as an IaaS, and it also deftly handles moving containers across different platforms. Kubernetes organizes clusters of Linux hosts that run containers, turns them off and on, moves them around hosts, configures them via declarative statements and automates provisioning.

Using Kubernetes in Azure

Clusters are sets of VMs designed to run containerized applications. A cluster holds a master VM and agent nodes or VMs that host the containers.

AKS limits the administrative workload that would be required to run this type of cluster on premises. AKS shares the container workload across the nodes in the cluster and redistributes resources when adding or removing nodes. Azure automatically upgrades and patches AKS.

Microsoft calls AKS self-healing, which means the platform will recover from infrastructure problems automatically. As with other cloud services, Microsoft charges only for the agent pool nodes that run.
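Adding or removing agent nodes is a single CLI call. As a sketch, assuming the resource group and cluster names used in the walkthrough below:

az aks scale --resource-group AKSCluster --name AKSCluster1 --node-count 3

Upgrades work similarly: az aks get-upgrades lists the Kubernetes versions a cluster can move to, and az aks upgrade applies one.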

Starting up Kubernetes in Azure

The simplest way to provision a new instance of an AKS cluster is to use Azure Cloud Shell, a browser-based command-line environment for working with Azure services and resources.

Azure Cloud Shell works like the Azure CLI, except it’s updated automatically and is available from a web browser. There are many service provider plug-ins enabled by default in the shell.

Figure: Starting a PowerShell session in the Azure Cloud Shell

Open Azure Cloud Shell at shell.azure.com. Choose PowerShell and sign in to the account with your Azure subscription. When the session starts, complete the provider registration with these commands:

az provider register -n Microsoft.Network
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Compute
az provider register -n Microsoft.ContainerService

How to create a Kubernetes cluster on Azure

Next, create a resource group, which will contain the Azure resources in the AKS cluster.

az group create --name AKSCluster --location centralus

Use the following command to create a cluster named AKSCluster1 that will live in the AKSCluster resource group with two associated nodes:

az aks create --resource-group AKSCluster --name AKSCluster1 --node-count 2 --generate-ssh-keys

Next, to use the Kubernetes command-line tool kubectl to control the cluster, get the necessary credentials:

az aks get-credentials --resource-group AKSCluster --name AKSCluster1

Next, use kubectl to list your nodes:

kubectl get nodes

Put the cluster into production with a manifest file

After setting up the cluster, load the applications. You’ll need a manifest file that dictates the cluster’s runtime configuration, the containers to run on the cluster and the services to use.

Developers can create this manifest file along with the appropriate container images and hand them to the operations team, which imports them into Kubernetes, or clones them from GitHub, and points the kubectl utility at the relevant manifest.
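To make the shape of a manifest concrete, here is a minimal sketch; the demo-app name and public nginx image are arbitrary examples rather than part of any Azure tutorial. It writes a two-replica deployment to a file and hands it to kubectl:

cat <<EOF > demo-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.15
        ports:
        - containerPort: 80
EOF
kubectl apply -f demo-app.yaml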

To get more familiar with Kubernetes in Azure, Microsoft offers a tutorial to build a web app that lets people vote for either cats or dogs. The app runs on a couple of container images with a front-end service.

Midmarket enterprises push UCaaS platform adoption

Cloud unified communications adoption is growing among midmarket enterprises as they look to improve employee communication, productivity and collaboration. Cloud offerings, too, are evolving to meet midmarket enterprise needs, according to a Gartner Inc. report on North American midmarket unified communications as a service (UCaaS).

Gartner, a market research firm based in Stamford, Conn., defines the midmarket as enterprises with 100 to 999 employees and revenue between $50 million and $1 billion. UCaaS spending in the midmarket segment reached nearly $1.5 billion in 2017 and is expected to hit almost $3 billion by 2021, according to the report. Midmarket UCaaS providers include vendors ranked in Gartner’s UCaaS Magic Quadrant report. The latest Gartner UCaaS midmarket report, however, examined North American-focused providers not ranked in the larger Magic Quadrant report, such as CenturyLink, Jive and Vonage.

But before deploying a UCaaS platform, midmarket IT decision-makers must evaluate the broader business requirements that go beyond communication and collaboration.

Evaluating the cost of a UCaaS platform

The most significant challenge facing midmarket IT planners over the next 12 months is budget constraints, according to the report. These constraints play a major role in midmarket UC decisions, said Megan Fernandez, Gartner analyst and co-author of the report.

“While UCaaS solutions are not always less expensive than premises-based solutions, the ability to acquire elastic services with straightforward costs is useful for many midsize enterprises,” she said.

Many midmarket enterprises are looking to acquire UCaaS functions as a bundled service rather than stand-alone functions, according to the report. Bundles can be more cost-effective as prices are based on a set of features rather than a single UC application. Other enterprises will acquire UCaaS through a freemium model, which offers basic voice and conferencing functionality.

“We tend to see freemium services coming into play when organizations are trying new services,” she said. “Users might access the service and determine if the freemium capabilities will suffice for their business needs.”

For some enterprises, this basic functionality will meet business requirements and offer cost savings. But other enterprises will upgrade to a paid UCaaS platform after using the freemium model to test services.

Figure: Enterprises are putting more emphasis on cloud communications services.

Addressing multiple network options

Midmarket enterprises have a variety of network configurations depending on the number of sites and access to fiber. As a result, UCaaS providers offer multiple WAN strategies to connect to enterprises. Midmarket IT planners should ensure UCaaS providers align with their companies’ preferred networking approach, Fernandez said.

Enterprises looking to keep network costs down may connect to a UCaaS platform via DSL or cable modem broadband. Enterprises with stricter voice quality requirements may pay more for an IP MPLS connection, according to the report. Software-defined WAN (SD-WAN) is also a growing trend for communications infrastructure. 

“We expect SD-WAN to be utilized in segments with requirements for high QoS,” Fernandez said. “We tend to see more requirements for high performance in certain industries like healthcare and financial services.”

Team collaboration’s influence and user preferences

Team collaboration, also referred to as workstream collaboration, offers capabilities similar to those of UCaaS platforms, such as voice, video and messaging, but its growing popularity won’t affect how enterprises buy UCaaS yet.

Fernandez said team collaboration is not a primary factor influencing UCaaS buying decisions as team collaboration is still acquired at the departmental or team level. But buying decisions could shift as the benefits of team-oriented management become more widely understood, she said.

“This means we’ll increasingly see more overlap in the UCaaS and workstream collaboration solution decisions in the future,” Fernandez said.

Intuitive user interfaces have also become an important factor in the UCaaS selection process as ease of use will affect user adoption of a UCaaS platform. According to the report, providers are addressing ease of use demands by trying to improve access to features, embedding AI functionality and enhancing interoperability among UC services.

Comparing the leading mobile device management products

The mobile device management space is growing at a rapid pace, and MDM is widely used across the enterprise to manage and secure smartphones and tablets. Investing in this technology enables organizations to not just secure mobile devices themselves, but the data on them and the corporate networks they connect to, as well.

The market for MDM software is saturated now, and there are new vendors arriving in this vertical on a consistent basis. Many of the larger names in mobile security, meanwhile, have been buying up smaller vendors and integrating their technology into their mobile management offerings, while others have remained pure mobile device management companies from the beginning. So what are the best mobile device management products available today?

Since the mobile security market has become so crowded, it is harder than ever to determine what the best mobile device management products are for an organization’s environment.

To make choosing easier for readers, this article evaluates six leading EMM companies that offer MDM as part of their bundles, weighing their products against the most important criteria to consider when procuring and deploying mobile security in the enterprise. These criteria include MDM implementation, app integration, containerization vs. non-containerization, licensing models and policy management. The mobile management vendors covered are Good Technology Inc., VMware AirWatch, MobileIron Inc., IBM MaaS360, Sophos and Citrix.

That being said, there are also niche players — such as BlackBerry — that are attempting to move into the broader MDM market outside of just securing and managing their own hardware, in addition to free offerings from the likes of Google that have attempted to compete with the above list of MDM vendors by providing tools to assist in Android device management. Even Microsoft has a small amount of MDM built into its operating systems to manage mobile devices.

Today, the vast majority of mobile devices in use — both smartphones and tablets — run on either Apple’s iOS or Google’s Android OS. So while many of today’s MDM products are also capable of managing Windows Phones, BlackBerry devices and so on, this article focuses mostly on their Apple and Android management and security capabilities.

Selecting the best mobile device management product for your organization isn’t easy. By using the criteria presented in this feature and asking six crucial questions before buying MDM software, an organization will find it easier to procure the right mobile management and security products to satisfy its enterprise needs.

Criteria #1: Implementation of MDM

Organizations should understand and plan out their mobile device deployment and MDM requirements before looking at vendors. The installation criteria for MDM are normally based on a few things: resources, money and hardware. With that being said, there are two distinct installation possibilities when deploying an MDM product.

The first is an on-premises implementation that needs dedicated resources, both from a hardware and technical perspective, to assist with installing the system or application on a network. Vendors like Good Technology with its Good for Enterprise suite require the installation of servers within an organization’s DMZ. This will necessitate firewall changes and operating system resources to implement.

These systems will then need to be managed appropriately to verify that they’re consistently patched and scanned for vulnerabilities, among other issues. In essence, this type of MDM deployment is treated as an additional server on an organization’s network.

It’s possible that a smaller business might shy away from an install of this nature due to the requirements and technical know-how it would take to get off the ground. On the other hand, if businesses are able to manage this type of mobile management and security product, it gives them complete ownership of these systems and the data that’s on them.

The second installation type is a cloud-based service that enables an off-premises installation of MDM, removing any concerns regarding management, technical resources and hardware. Vendors like VMware AirWatch and Sophos let customers provision their entire MDM product in the cloud and manage the system from any internet connection. This is both a pro and a con: It gives companies with resource constraints — like not having the experience or headcount — the ability to get an MDM product set up quickly, but it does so at the risk of having data reside in the cloud, outside the complete control of these organizations.

Depending on an organization’s resource availability, technical experience and risk appetite, these are the two options — on-premises and cloud — currently available for installing MDM.

Criteria #2: App integration

Apps are a major reason mobile device popularity and demand have increased exponentially over the years. Unless apps work properly and securely, the power of mobile devices and the ability of users to take full advantage of these tools become severely limited.

MDM companies have realized this need for functionality and security, so they’ve created business-grade apps that enable productivity without compromising the integrity of mobile devices, the data on them and the networks to which they connect.

Citrix has created XenMobile Apps, which are tied together and save data in a secure sandbox on mobile devices, so users don’t have to turn to unapproved apps that could send business data to potentially insecure apps outside the enterprise’s control. The sandboxing technology works by securing, and at times even partitioning, the MDM app separately from the rest of the mobile OS — essentially isolating it from the rest of the device, while allowing the user to work securely and efficiently.

There are also third-party app vendors that MDM vendors have partnered with to create branded apps. Good Technology, for example, has partnered with many large vendors to accommodate the need to use their apps within a specific MDM environment. This integration is extremely helpful, and the synergy between the vendors creates better security and more productive users. Sophos allows this as well with its Secure Workspace feature, which enables users to access files within a container while securing access to those documents.

Whether you’re using apps created by an MDM vendor for additional security, or apps developed through the collaboration of an MDM vendor and a third party, most of the work on a mobile device is done via these apps, so securing the data that flows through them and is created on them is essential.

Criteria #3: Container vs. non-container

There are two major operational options available when researching MDM products: MDM that uses the container approach and MDM that uses the non-container approach. This is a major decision that needs to be made before selecting a mobile management product, as most vendors only subscribe to one of these methods.

This decision, whether to go with the container or non-container method of mobile management, will guide the device policy, app installation policy, BYOD plans and data security for the mobile devices that an organization is looking to manage.

A containerized approach keeps all the data and access to corporate resources contained within an app on a mobile device. This app normally won’t allow data to pass between the container and the rest of the device in either direction.

Both the Good for Enterprise suite and MaaS360 offer MDM products that enable customers to use a containerized approach. Large companies tend to benefit from this approach — as do government agencies and financial institutions — as it tends to offer the highest degree of protection for sensitive data.

Once a container is removed from a mobile device, all organizational data is gone, and the organization can be sure there was no data leakage onto the mobile device.

In contrast to the restricted tactic used by containerization, the non-container approach creates a more fluid and seamless user experience on mobile devices. Companies like VMware AirWatch, Sophos and MobileIron are the leaders in this approach, which enforces security on mobile devices via policies and integrated apps. These systems rely on pushing policies to the native OS to control mobile devices. They also support multiple integrated apps — supplied by trusted vendors the MDM companies have partnered with — that add a layer of security to their data. These companies also support containers where needed, helping bridge the gap between the two approaches for customers.

Many organizations, including startups and those in retail, lean toward the non-container approach for mobile management and security due to the speed and native familiarity that end users already have with their mobile devices — with OS-bundled calendaring and mail apps, for example. However, keep in mind, in order to completely secure all the data on mobile devices, the non-container approach requires the aforementioned tight MDM policies and integrated apps to enforce the protection of data.

Criteria #4: License models

The licensing model for MDM has changed slightly in recent years. In the past, there was only a per-device license model, which pushed organizations into licensing that wasn’t financially effective for them. With the emergence of tablets and of users carrying multiple smartphones came the need for a license model based on the user, not the individual device.

All the MDM products covered in this article offer similar, if not identical, pricing models. MDM vendors have listened to customers and realized that end users in this day and age don’t always have just one device. Which licensing model an organization chooses, per device or per user, depends on the company’s mobile device inventory.

The per-device model normally works well for small companies. In this model, every device counts against the organization’s total license count; if a user has three devices, all three count toward the business’s total. These licenses are normally cheaper per seat, but can quickly become expensive if multiple devices per user require coverage.

The user-based pricing model, by contrast, takes into account the need for users to have multiple devices that all require MDM coverage. With this model, the user name is the basis of the license, and the user can have multiple devices attached to his one license. This is the reason many larger organizations lean toward this model, or at least a hybrid approach of the two licensing models — to account for users who use multiple mobile devices.

Criteria #5: Policy management

This is an important feature of mobile device management, and one that organizations need to review with a request for proposal (RFP) or another document that details the mobile device policies they require. Mobile policies enable organizations to make granular changes to a mobile device to limit certain features — the camera and apps, among others — push wireless networks, create VPN tunnels and whitelist apps. This is the nuts and bolts of MDM, and a criterion that should be reviewed heavily during the proof-of-concept stage with specific vendors.

This ability to push certain features of a policy to mobile devices is certainly required, as is the ability to wipe devices remotely should they be lost or stolen. While all the MDM products covered in this article provide the ability to remotely wipe mobile devices, in the case of Good for Enterprise and IBM MaaS360, organizations have the option to wipe mobile devices completely or to just remove the container.

Also important for MDM products is the ability to perform actions such as VPN connections, wireless network configurations and certificate installs, which AirWatch can accomplish. Sophos also offers the ability to manage policies from a security perspective by enforcing antiphishing, antimalware and web protection.

Spell out these options in an RFP beforehand to determine what part of the mobile device policy you’re looking to secure. Evaluating what policy changes you can push to a mobile device, and what functions an organization might want to see within a policy, will provide the insight needed for an educated decision on the best mobile device management products.

Most times, there will be multiple policies, so that certain users receive a particular policy while someone with other needs receives a completely different MDM policy. This is a standard function within all MDMs, but understand that a single policy for all users is not always feasible.

Finding the best mobile device management product for you

There are many vendors in this saturated market, but following these five criteria should assist organizations in narrowing the field down to find the best mobile device management products available today. There is much overlap between vendors, but finding the right one that can secure an organization’s data completely and offer full coverage, with the ability to manage all the aspects needed in a policy, is what businesses should be aiming for in MDM products.

Many large companies, especially those in the financial or government sector, are running Good for Enterprise due to the extra layer of security it provides by leveraging a container and integrated apps developed by vendors with whom they partnered.

IBM MaaS360, on the other hand, offers both a container and non-container approach to mobile security and management, which makes it suitable for larger enterprises that require some flexibility in how they deploy. This gives IBM MaaS360 the ability to play to both sides and gives it some leverage over competitors by attracting customers from both mindsets.

Many midsize companies don’t have to meet the level of security imposed on large financial clients, though, and thus aren’t racing to boost their mobile device security. We’ve seen many times, however, that compliance brings an extra layer of required security, making these organizations more conscientious about securing data on mobile devices.

Midsize to large companies — those outside of the financial sector — tend to run AirWatch, Sophos or MobileIron MDM due to their ability to keep the native feel of mobile devices intact while pushing custom policies that secure clients’ mobile devices.

As for app integration, Citrix has performed very well with XenMobile, showing that it’s pushing the boundaries of this area. These apps are selling points for many customers who want to integrate their data onto a mobile device but want the flexibility to manage the data these mobile apps consume. By dispensing approved apps to managed mobile devices and writing policy for how their data is used, MDM products such as Citrix’s add an extra layer of data control for the company and ease of use for the user.

As mobile devices become more indispensable for business users, the MDM market will keep expanding in response to the growing need for mobile security.

SD-WAN a tool for combining networks, engineer says

It’s a good thing to work at a growing company. But it can be a bad thing when that growing company acquires another, and you’re the one charged with combining networks into a cohesive whole.

Ethan Banks, writing in PacketPushers, said the pressure to integrate applications and systems can be intense, leading engineers to cobble together quick-and-dirty options to keep the data flowing. But those options — say, a quick-and-dirty IPsec tunnel — can cause headaches later on.

Yet, there might be another approach to ease the pain associated with combining networks: SD-WAN. Software-defined WAN can be the glue engineers are looking for, Banks said. Among the technology’s advantages, it’s easily managed, offers redundant connectivity and supports the Interior Gateway Protocol, including the use of a dynamic multipoint virtual private network. In addition, Banks said SD-WAN permits network segmentation and service chaining. Banks also listed some caveats, among them cost and complexity.

Still, he said, “I see SD-WAN as a way to onboard an acquired network permanently, while retaining the fast time to connect that an IPsec tunnel offers. For organizations who already have an SD-WAN solution in place, there’s not much to think about. For organizations who haven’t invested in SD-WAN yet, this might be an additional driver to do so.”

Read what else Banks has to say about using SD-WAN as a tool for combining networks.

Juniper’s embrace of automation and what to expect

Dan Conde, an analyst with Enterprise Strategy Group in Milford, Mass., said he expects Juniper Networks to use its annual conference to shed more light on its Self-Driving Network initiative. The company last week released details about a trio of bot apps engineered to automate telemetry, auditing and peer monitoring; the apps will ship early next year.

Conde said he believes this is just the start. “Juniper has been an advocate of automation for a while,” he said, citing first-generation devices that relied on APIs instead of command-line interfaces to program them. “Automation is nothing new to them.”

What is new is layering intelligence to automation, giving the software the ability to adjust network performance as needed. Conde said it’s immaterial what role the intelligence is used for — whether it’s to check configuration or status. What is important is automating as many processes as possible.

“I look forward to a day when even more items get automated, and when IT pros will someday leave behind their skepticism and conservatism on automation and embrace timesavers that make their lives easier,” Conde said.

Dig deeper into Conde’s thoughts about Juniper’s strategy.

What Google’s nifty chip may say about AI

So, Google’s nifty AlphaZero computer algorithm not only learned how to play chess in four hours, it went on to demolish Stockfish — known as the highest-rated chess computer — in a head-to-head match.

After 100 games, it was AlphaZero 28-0 over Stockfish, with 72 draws, said Brad Shimmin, research director at Current Analysis Inc. in Sterling, Va., in a recent post. But it wasn’t about the game. Instead, it was about the chips.

Google engineered AlphaZero with 5,000 AI-specific tensor processing units (TPUs), which the machine used to “learn” how to play chess. The machine also had 64 second-generation TPUs that provided the necessary neural network training. Once the games began, Google stripped AlphaZero down to only four TPUs, which was all the machine needed to defeat Stockfish.

“AlphaZero’s mastery of chess stemmed from the sheer, brute force of Google’s AI-specific TPUs,” Shimmin said, adding that each TPU can deliver up to 225,000 predictions per second. A conventional CPU, by contrast, can only churn out 5,000 predictions per second, a 45-fold difference. “It is this hardware-driven ability to iteratively learn at speed that unlocks the door to AI’s potential,” Shimmin said. “That’s where we’ll see the most innovation and competition over the coming year as vendors speed up AI through purpose-built hardware.”

Check in with Shimmin to read more about what Google is trying to do.