
SaltStack infrastructure-as-code tools seek cloud-native niche

As IT automation evolves to suit the cloud-native era, earlier approaches such as configuration management and infrastructure-as-code tools must also change with the times.

SaltStack infrastructure-as-code tools, along with products from competitors such as Puppet, Chef and Ansible, must accommodate fresh IT trends, from AI to immutable infrastructure. They must also do so quickly to keep up with the rapid pace of innovation in the IT industry, something SaltStack has failed to do in recent years, company officials acknowledge.

“[Our new approach] will be an accelerant that allows us to create [new products] much more quickly than we have in the past, and in a much more maintainable way,” said Salt open source creator Thomas Hatch, who is also CTO of SaltStack, the project’s commercial backer.

That new approach is an overhauled software development process based on the principles of plugin-oriented programming (POP), first introduced to core SaltStack products in 2018. This week, the company also renewed its claim in cloud-native territory with three new open source modules developed using POP that will help it keep pace with rivals and emerging technologies, Hatch said.

The modules are Heist, which creates “dissolvable” infrastructure-as-code execution agents to better serve ephemeral apps; Umbra, which automatically links IT data streams to AI and machine learning services; and Idem, a redesigned data description language based in YAML that simplifies the enforcement of application state.
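Idem's actual syntax isn't shown in this article; the Python sketch below (with hypothetical resource names) only illustrates the general idea behind a declarative state language like Idem: the desired state is plain data, and an engine computes whatever changes are needed to enforce it.

```python
# Hypothetical illustration of declarative state enforcement, not Idem's
# real syntax or API. Desired state is described as data; the engine diffs
# it against observed state and returns the changes needed to converge.

desired = {"web01": {"pkg.nginx": "installed", "svc.nginx": "running"}}
current = {"web01": {"pkg.nginx": "installed", "svc.nginx": "stopped"}}

def plan(desired, current):
    """Return (host, resource, have, want) tuples for every drifted resource."""
    changes = []
    for host, states in desired.items():
        for resource, want in states.items():
            have = current.get(host, {}).get(resource)
            if have != want:
                changes.append((host, resource, have, want))
    return changes

print(plan(desired, current))  # [('web01', 'svc.nginx', 'stopped', 'running')]
```

The point of the pattern is that enforcement is idempotent: running the plan against an already-converged system produces no changes.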

Salt open source contributors say POP has already sped up the project’s development, where previously they faced long delays between code contributions and production-ready inclusion in the main Salt codebase.

“I’m the largest contributor of Azure-specific code to the Salt open source project, and I committed the bulk of that code at the beginning of 2017,” said Nicholas Hughes, founder and CEO of IT automation consulting firm EITR Technologies in Sykesville, Md., which is also a licensed reseller of SaltStack’s commercial product.  “It was accepted into the developer branch at that point. It just showed up in the stable branch at the beginning of 2019, nearly two years later.”

The new modules, especially Idem, can also be used to modernize Salt, particularly its integrations with cloud service providers, Hughes said.

SaltStack rewrote its infrastructure-as-code tools with plugin-oriented programming instead of a traditional object-oriented method.

SaltStack revs update engine with POP and Idem

SaltStack’s Hatch introduced the POP method three years ago. This approach is a faster, more flexible alternative to the more traditional object-oriented programming method developers previously used to maintain the project, Hatch said.

“Object-oriented programming [puts] functions and data right next to each other, and the result is … a lot of isolated silos of code and data,” he said. “Then you end up building custom scaffolding to get them all to communicate with each other, which means it can become really difficult to extend that code.”

Plugin-oriented programming, by contrast, is based on small modules that can be developed separately and merged quickly into a larger codebase.
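The pattern can be illustrated with a minimal Python sketch (hypothetical names, not POP's actual API): independently written plugin modules attach their functions to a shared hub, so extending the system means dropping in a new module rather than editing core code.

```python
# Hypothetical sketch of the plugin-oriented idea described above -- not the
# real POP framework. A central "hub" merges callables contributed by small,
# independent plugin modules that never import each other directly.

class Hub:
    """Central namespace that plugins attach their functions to."""
    def __init__(self):
        self._funcs = {}

    def register(self, namespace, name, func):
        self._funcs[f"{namespace}.{name}"] = func

    def call(self, path, *args, **kwargs):
        return self._funcs[path](*args, **kwargs)

# Two independently developed "plugins".
def plugin_greet(hub):
    hub.register("demo", "greet", lambda who: f"hello, {who}")

def plugin_shout(hub):
    # A plugin can build on anything already merged into the hub.
    hub.register("demo", "shout", lambda who: hub.call("demo.greet", who).upper())

hub = Hub()
for loader in (plugin_greet, plugin_shout):
    loader(hub)

print(hub.call("demo.shout", "salt"))  # HELLO, SALT
```

Because plugins communicate only through the hub, there is no need for the "custom scaffolding" between code silos that Hatch describes in the object-oriented approach.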

The new modules rolled out this week serve as a demonstration of how much more quickly the development of Salt and SaltStack infrastructure-as-code tools can move using POP, Hatch said. While an earlier project, Salt SSH, took one engineer two months to create a minimum viable product, and another six months to polish, Heist took one engineer a week and a half to stand up and another two weeks to refine, he said.

Similar open source projects that maintain infrastructure-as-code tools, such as HashiCorp’s Terraform, had long since broken up their codebases into more modular pieces to speed development, Hughes said. He also contributes Azure integration code to Terraform’s open source community.


Now, Hughes said he has high hopes for Idem as a vehicle to further modernize cloud provider integrations in open source Salt, and he has already ported all the Azure code he wrote for Salt into Idem using POP.

“It will allow us to move and iterate and build out those codebases much more easily, and version and handle them separately,” he said. He’d also like to see Salt’s open source AWS integrations updated to work with Idem, as well as Salt functions such as the event bus, which ties in with third-party APIs to orchestrate CI/CD and IT monitoring systems alongside infrastructure.

As the cloud working group captain for the Salt open source project, Hughes said he’s put out a call for the community to port more cloud components into Idem, but that’s still a work in progress.

Infrastructure-as-code tools ‘reaching the end of their run?’

In the meantime, the breakneck pace of cloud-native technology development waits for no one, and most of SaltStack’s traditional competitors in infrastructure-as-code tools, such as Puppet, Chef and Ansible, have a head start in the race to reinvent themselves.

Puppet has sought a foothold in CI/CD tools with its Distelli acquisition and moved into agentless IT automation, similar to Ansible’s, with Puppet Bolt. Chef overhauled its Ruby codebase with Rust to create the Chef Habitat project in 2015, years ahead of SaltStack’s POP, and expanded into IT security and compliance with Chef InSpec, which rolled out in version 1.0 in 2016.

SaltStack plans to refocus its business primarily on cloud security automation, which Hatch said accounted for 40 percent of the company’s new sales in 2019. It began that expansion in late 2018, and SaltStack has some potential advantages over Chef InSpec, since it can automate security vulnerability remediation without relying on third-party tools. The company also beat Red Hat Ansible to the security automation punch; Ansible began that push in earnest late last year.

Still, Ansible also has the cachet of its IBM/Red Hat backing and well-known network automation prowess.

HashiCorp’s Terraform has a long lead over Salt’s Idem-based POP modules in cloud provisioning integrations, and the company has hot projects to sustain it in other areas of IT, including cloud security with Vault secrets management.

“SaltStack seems to be the slowest to redefine themselves, and they’re the smallest [among their competitors], in my view,” said Jim Mercer, analyst at IDC. “The Umbra plugin that could pull them through into the hot area of AI and machine learning certainly isn’t going to hurt them, but there’s only so much growth left here.” A SaltStack spokesperson expressed disagreement with Mercer’s characterization of the company.

As container orchestration tools such as Kubernetes have risen in popularity, they’ve encroached on the traditional configuration management turf of vendors such as SaltStack, Puppet and Chef, though infrastructure-as-code tools such as Terraform remain widely used to automate cloud infrastructure under Kubernetes and to tie in to GitOps workflows.

Still, the market for infrastructure-as-code tools has also begun to erode, in Mercer’s view, with the growth of function-as-a-service products such as AWS Lambda and serverless container approaches such as AWS Fargate that eliminate infrastructure management below the application container level. Even among shops that still manage infrastructure under Kubernetes, fresh approaches to IT automation have begun to horn in on infrastructure as code’s turf, such as Kubernetes Helm, Kubernetes Operators and KUDO Operators created by D2iQ, formerly Mesosphere.

“These tools had their heyday, but they’re reaching the end of their run,” Mercer said. “They’re still widely used for existing apps, but as new cloud-native apps emerge, they’ll start to go the way of the VCR.”


How to Use Azure Arc for Hybrid Cloud Management and Security

Azure Arc is a new hybrid cloud management option announced by Microsoft in November of 2019. This article serves as a single point of reference for all things Azure Arc.

According to Microsoft CEO Satya Nadella, “Azure Arc really marks the beginning of this new era of hybrid computing where there is a control plane built for multi-cloud, multi-edge” (Microsoft Ignite 2019 Keynote at 14:40). That is a strong statement from one of the industry leaders in cloud computing, especially since hybrid cloud computing has already been around for a decade.

Essentially, Azure Arc allows organizations to use Azure’s management technologies (the “control plane”) to centrally administer public cloud resources along with on-premises servers, virtual machines, and containers. Since Microsoft Azure already manages distributed resources at scale, Microsoft is empowering its users to apply these same features to all of their hardware, including edge servers. All of Azure’s AI, automation, compliance, and security best practices are now available to manage all of their distributed cloud resources and the underlying infrastructure, known as “connected machines.” Additionally, several of Azure’s AI and data services can now be deployed on-premises and centrally managed through Azure Arc, enhancing local and offline management and offering greater data sovereignty. This article provides an overview of the Azure Arc technology and its key capabilities (currently in Public Preview) and will be updated over time.

Video Preview of Azure Arc

Contents

Getting Started with Azure Arc

Azure Services with Azure Arc

Azure Artificial Intelligence (AI) with Azure Arc

Azure Automation with Azure Arc

Azure Cost Management & Billing with Azure Arc

Azure Data Services with Azure Arc

Cloud Availability with Azure Arc

Azure Availability & Resiliency with Azure Arc

Azure Backup & Restore with Azure Arc

Azure Site Recovery & Geo-Replication with Azure Arc

Cloud Management with Azure Arc

Management Tools with Azure Arc

Managing Legacy Hardware with Azure Arc

Offline Management with Azure Arc

Always Up-To-Date Tools with Azure Arc

Cloud Security & Compliance with Azure Arc

Azure Key Vault with Azure Arc

Azure Monitor with Azure Arc

Azure Policy with Azure Arc

Azure Security Center with Azure Arc

Azure Advanced Threat Protection with Azure Arc

Azure Update Management with Azure Arc

Role-Based Access Control (RBAC) with Azure Arc

DevOps and Application Management with Azure Arc

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Other DevOps Tools with Azure Arc

DevOps On-Premises with Azure Arc

Elastic Scalability & Rapid Deployment with Azure Arc

Hybrid Cloud Integration with Azure Arc

Azure Stack Hub with Azure Arc

Azure Stack Edge with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Third-Party Integration with Azure Arc

Amazon Web Services (AWS) Integration with Azure Arc

Google Cloud Platform (GCP) Integration with Azure Arc

IBM Kubernetes Service Integration with Azure Arc

Linux VM Integration with Azure Arc

VMware Cloud Solution Integration with Azure Arc

Getting Started with Azure Arc

The Azure Arc public preview was announced in November 2019 at the Microsoft Ignite conference to much fanfare. In its initial release, many fundamental Azure services are supported along with Azure Data Services. Over time, it is expected that a majority of Azure Services will be supported by Azure Arc.

To get started with Azure Arc, check out the following guides and documentation provided by Microsoft.

Additional information will be added once it is made available.

Azure Services with Azure Arc

One of the fundamental benefits of Azure Arc is the ability to bring Azure services to a customer’s own datacenter. In its initial release, Azure Arc includes services for AI, automation, availability, billing, data, DevOps, Kubernetes management, security, and compliance. Over time, additional Azure services will be available through Azure Arc.

Azure Artificial Intelligence (AI) with Azure Arc

Azure Arc leverages Microsoft Azure’s artificial intelligence (AI) services to power some of its advanced decision-making abilities, learned from managing millions of devices at scale. Since Azure AI continually monitors billions of endpoints, it is able to perform tasks that can only be achieved at scale, such as identifying an emerging malware attack. Azure AI improves security, compliance, scalability and more for all cloud resources managed by Azure Arc. The services which run Azure AI are hosted in Microsoft Azure, and in disconnected environments, much of the AI processing can run on local servers using Azure Stack Edge.

For more information about Azure AI visit https://azure.microsoft.com/en-us/overview/ai-platform.

Azure Automation with Azure Arc

Azure Automation is a service provided by Azure that automates repetitive tasks which can be time-consuming or error-prone. This saves the organization significant time and money while helping them maintain operational consistency. Custom automation scripts can be triggered by a schedule or an event to automate servicing, track changes, collect inventory and much more. Since Azure Automation uses PowerShell, Python, and graphical runbooks, it can manage diverse software and hardware that supports PowerShell or has APIs. With Azure Arc, any on-premises connected machines and the applications they host can be integrated and automated with any Azure Automation workflow. These workflows can also be run locally on disconnected machines.
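As a rough illustration of the kind of repetitive, error-prone check a Python runbook might automate (a hypothetical example with made-up service data; a real runbook would query live systems through APIs or the Azure SDK):

```python
# Hypothetical runbook-style check: given an inventory of services and their
# states, flag anything not running so a remediation workflow can act on it.
# The inventory here is hard-coded for illustration only.

services = {"web": "running", "db": "running", "cache": "stopped"}

def find_unhealthy(inventory):
    """Return an alphabetized list of services that are not running."""
    return sorted(name for name, state in inventory.items() if state != "running")

print(find_unhealthy(services))  # ['cache']
```

In practice such a script would be scheduled or event-triggered by Azure Automation, with its findings feeding alerts or a follow-up remediation runbook.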

For more information about Azure Automation visit https://azure.microsoft.com/en-in/services/automation.

Azure Cost Management & Billing with Azure Arc

Microsoft Azure and other cloud providers use a consumption-based billing model so that tenants only pay for the resources which they consume. Azure Cost Management and Billing provides granular information to understand how cloud storage, network, memory, CPUs and any Azure services are being used. Organizations can set thresholds and get alerts when any consumer or business unit approaches or exceeds their limits. With Azure Arc, organizations can use cloud billing to optimize and manage costs for their on-premises resources as well. In addition to Microsoft Azure and Microsoft hybrid cloud workloads, all Amazon AWS spending can be integrated into the same dashboard.

For more information about Azure Cost Management and Billing visit https://azure.microsoft.com/en-us/services/cost-management.

Azure Data Services with Azure Arc

Azure Data Services is the first major service provided by Azure Arc for on-premises servers. This was the top request of many organizations which want the management capabilities of Microsoft Azure, yet need to keep their data on-premises for data sovereignty. This makes Azure Data Services accessible to companies that must keep their customer’s data onsite, such as those working within regulated industries or those which do not have an Azure datacenter within their country.

In the initial release, both Azure SQL Database and Azure Database for PostgreSQL Hyperscale will be available for on-premises deployments. Organizations can now offer database as a service (DBaaS) to their tenants as a platform-as-a-service (PaaS) offering. This makes it easier for users to deploy and manage cloud databases on their own infrastructure, without the overhead of setting up and maintaining a physical server or virtual machine. Azure Data Services on Azure Arc still require an underlying Kubernetes cluster, but many Kubernetes management frameworks are supported by Microsoft Azure and Azure Arc.

All of the other Azure Arc benefits are included with the data services, such as automation, backup, monitoring, scaling, security, patching and cost management. Additionally, Azure Data Services can run on both connected and disconnected machines. The latest features and updates to the data services are automatically pushed down from Microsoft Azure to Azure Arc members so that the infrastructure is always current and consistent.

For more information about Azure Data Services with Azure Arc visit https://azure.microsoft.com/en-us/services/azure-arc/hybrid-data-services.

Cloud Availability with Azure Arc

One of the main advantages offered by Microsoft Azure is access to its unlimited hardware spread across multiple datacenters which provide business continuity. This gives Azure customers numerous ways to increase service availability, retain more backups, and gain disaster recovery capabilities. With the introduction of Azure Arc, these features provide even greater integration between on-premises servers and Microsoft Azure.

Azure Availability & Resiliency with Azure Arc

With Azure Arc, organizations can leverage Azure’s availability and resiliency features for their on-premises servers. Virtual Machine Scale Sets allow automatic application scaling by rapidly deploying dozens (or thousands) of VMs to quickly increase the processing capabilities of a cloud application. Integrated load-balancing will distribute network traffic, and redundancy is built into the infrastructure to eliminate single points of failure. VM Availability Sets give administrators the ability to select a group of related VMs and force them to distribute themselves across different physical servers. This is recommended for redundant servers or guest clusters, where it is important to have each virtualized instance spread out so that the loss of a single host will not take down an entire service. Azure Availability Zones extend this concept across multiple datacenters by letting organizations deploy datacenter-wide protection schemes that distribute applications and their data across multiple sites. Azure’s automated updating solutions are availability-aware, so they will keep services online during a patching cycle, serially updating and rebooting a subset of hosts. Azure Arc helps hybrid cloud services take advantage of all of the Azure resiliency features.

For more information about Azure availability and resiliency visit https://azure.microsoft.com/en-us/features/resiliency.

Azure Backup & Restore with Azure Arc

Many organizations limit their backup plans because of their storage constraints, since it can be costly to store large amounts of data which may not need to be accessed again. Azure Backup helps organizations by allowing their data to be backed up and stored on Microsoft Azure. This usually reduces costs, as users only pay for the storage capacity they are using. Additionally, storing backups offsite helps minimize data loss, as offsite backups provide resiliency to site-wide outages and can protect customers from ransomware. Azure Backup also offers compression, encryption and retention policies to help organizations in regulated industries. Azure Arc manages the backups and recovery of on-premises servers with Microsoft Azure, with the backups being stored in the customer’s own datacenter or in Microsoft Azure.

For more information about Azure Backup visit https://azure.microsoft.com/en-us/services/backup.

Azure Site Recovery & Geo-Replication with Azure Arc

One of the more popular hybrid cloud features enabled with Microsoft Azure is the ability to replicate data from an on-premises location to Microsoft Azure using Azure Site Recovery (ASR). This allows users to have a disaster recovery site without needing to have a second datacenter. ASR is easy to deploy, configure, and operate, and it can even test disaster recovery plans. Using Azure Arc it is possible to set up geo-replication to move data and services from a managed datacenter running Windows Server Hyper-V, VMware vCenter or the Amazon Web Services (AWS) public cloud. Destination datacenters can include other datacenters managed by Azure Arc, Microsoft Azure and Amazon AWS.

For more information about Azure Site Recovery visit https://azure.microsoft.com/en-us/services/site-recovery.

Cloud Management with Azure Arc

Azure Arc introduces some on-premises management benefits which were previously available only in Microsoft Azure. These help organizations administer legacy hardware and disconnected machines with Azure-consistent features using multiple management tools.

Management Tools with Azure Arc

One of the fundamental design concepts of Microsoft Azure is to have centralized management layers (“planes”) that support diverse hardware, data, and administrative tools. The fabric plane controls the hardware through a standard set of interfaces and APIs. The data plane allows unified management of structured and unstructured data. And the control plane offers centralized management through various interfaces, including the GUI-based Azure Portal, Azure PowerShell, and other APIs. Each of these layers interfaces with each other through a standard set of controls, so that the operational steps will be identical whether a user deploys a VM via the Azure Portal or via Azure PowerShell. Azure Arc can manage cloud resources with the following Azure Developer Tools:

At the time of this writing, the documentation for Azure Arc is not yet available, but some examples can be found in the quick start guides which are linked in the Getting Started with Azure Arc section.

Managing Legacy Hardware with Azure Arc

Azure Arc is hardware-agnostic, allowing Azure to manage a customer’s diverse or legacy hardware just like an Azure datacenter server. The hardware must meet certain requirements so that a virtualized Kubernetes cluster can be deployed on it, as Azure services run on this virtualized infrastructure. In the Public Preview, servers must be running Windows Server 2012 R2 (or newer) or Ubuntu 16.04 or 18.04. Over time, additional servers will be supported, with rumors of 32-bit (x86), Oracle and Linux hosts being supported as infrastructure servers.

Offline Management with Azure Arc

Azure Arc will even be able to manage servers that are not regularly connected to the Internet, as is common with the military, emergency services, and sea vessels. Azure Arc has a concept of “connected” and “disconnected” machines. Connected servers have an Azure Resource ID and are part of an Azure resource group. If a server does not sync with Microsoft Azure every 5 minutes, it is considered disconnected, yet it can continue to run its local resources. Azure Arc allows these organizations to use the latest Azure services when they are connected, and still use many of these features, including Azure Data Services, even if the servers do not maintain an active connection. Even some services which run Azure AI and are hosted in Microsoft Azure can work in disconnected environments while running on Azure Stack Edge.
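The connected-versus-disconnected rule described above can be sketched in a few lines of Python (an illustration of the 5-minute sync window, not Azure Arc's actual implementation):

```python
# Hypothetical sketch of the connected/disconnected status rule: a machine
# that has not synced with the control plane within the 5-minute window is
# reported as disconnected, but it keeps running its local resources.

from datetime import datetime, timedelta

SYNC_WINDOW = timedelta(minutes=5)

def machine_status(last_sync: datetime, now: datetime) -> str:
    return "connected" if now - last_sync <= SYNC_WINDOW else "disconnected"

now = datetime(2020, 1, 1, 12, 0, 0)
print(machine_status(now - timedelta(minutes=2), now))   # connected
print(machine_status(now - timedelta(minutes=30), now))  # disconnected
```

Note that status only affects central visibility and management; in Azure Arc's model, a disconnected machine continues serving its local workloads until it next syncs.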

Always Up-To-Date Tools with Azure Arc

One of the advantages of using Microsoft Azure is that all the services are kept current by Microsoft. The latest features, best practices, and AI learning are automatically available to all users in real-time as soon as they are released. When an admin logs into the Azure Portal through a web browser, they are immediately exposed to the latest technology to manage their distributed infrastructure. By ensuring that all users have the same management interface and APIs, Microsoft can guarantee consistency of behavior for all users across all hardware, including on-premises infrastructure when using Azure Arc. However, if the hardware is in a disconnected environment (such as on a sea vessel), there could be some configuration drift as older versions of Azure data services and Azure management tools may still be used until they are reconnected and synced.

Cloud Security & Compliance with Azure Arc

Public cloud services like Microsoft Azure are able to offer industry-leading security and compliance due to their scale and expertise. Microsoft employs more than 3,500 of the world’s leading security engineers who have been collaborating for decades to build the industry’s safest infrastructure. Through its billions of endpoints, Microsoft Azure leverages Azure AI to identify anomalies and detect threats before they become widespread. Azure Arc extends all of the security features offered in Microsoft Azure to on-premises infrastructure, including key vaults, monitoring, policies, security, threat protection, and update management.

Azure Key Vault with Azure Arc

When working in a distributed computing environment, managing credentials, passwords, and user access can become complicated. Azure Key Vault is a service that helps enhance data protection and compliance by securely protecting all keys and monitoring access. Azure Key Vault is supported by Azure Arc, allowing credentials for on-premises services and hybrid clouds to be centrally managed through Azure.

For more information about Azure Key Vault visit https://azure.microsoft.com/en-us/services/key-vault.

Azure Monitor with Azure Arc

Azure Monitor is a service that collects and analyzes telemetry data from Azure infrastructure, networks, and applications. The logs from managed services are sent to Azure Monitor where they are aggregated and analyzed. If a problem is identified, such as an offline server, it can trigger alerts or use Azure Automation to launch recovery workflows. Azure Arc can now monitor on-premises servers, networks, virtualization infrastructure, and applications, just like they were running in Azure. It even leverages Azure AI and Azure Automation to make recommendations and fixes to hybrid cloud infrastructure.

For more information about Azure Monitor visit https://azure.microsoft.com/en-us/services/monitor.

Azure Policy with Azure Arc

Most enterprises have certain compliance requirements for their IT infrastructure, especially those organizations within regulated industries. Azure Policy uses Microsoft Azure to audit an environment and aggregate all the compliance data into a single location. Administrators can get alerted about misconfigurations or configuration drift and even trigger automated remediation using Azure Automation. Azure Policy can be used with Azure Arc to apply policies to all connected machines, providing the benefits of cloud compliance to on-premises infrastructure.

For more information about Azure Policy visit https://azure.microsoft.com/en-us/services/azure-policy.

Azure Security Center with Azure Arc

The Azure Security Center centralizes all security policies and protects the entire managed environment. When Security Center is enabled, the Azure monitoring agents will report data back from the servers, networks, virtual machines, databases, and applications. The Azure Security Center analytics engines will ingest the data and use AI to provide guidance. It will recommend a broad set of improvements to enhance security, such as closing unnecessary ports or encrypting disks. Perhaps most importantly it will scan all the managed servers and identify updates that are missing, and it can use Azure Automation and Azure Update Management to patch those vulnerabilities. Azure Arc extends these security features to connected machines and services to protect all registered resources.

For more information about Azure Security Center visit https://azure.microsoft.com/en-us/services/security-center

Azure Advanced Threat Protection with Azure Arc

Azure Advanced Threat Protection (ATP) strengthens cloud security by looking for anomalies and potential attacks with Azure AI. Azure ATP looks for suspicious computer or user activity and reports any alerts in real time. Azure Arc lets organizations extend this cloud protection to their hybrid and on-premises infrastructure, offering leading threat protection across all of their cloud resources.

For more information about Azure Advanced Threat Protection visit https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection.

Azure Update Management with Azure Arc

Microsoft Azure automates the process of applying patches, updates and security hotfixes to the cloud resources it manages. With Update Management, a series of updates can be scheduled and deployed on non-compliant servers using Azure Automation. Update management is aware of clusters and availability sets, ensuring that a distributed workload remains online while its infrastructure is patched by live migrating running VMs or containers between hosts. Azure will centrally manage updates, assessment reports, deployment results, and can create alerts for failures or other conditions. Organizations can use Azure Arc to automatically analyze and patch their on-premises and connected servers, virtual machines, and applications.

For more information about Azure Update Management visit https://docs.microsoft.com/en-us/azure/automation/automation-update-management.

Role-Based Access Control (RBAC) with Azure Arc

Controlling access to different resources is a critical function for any organization to enforce security and compliance. Microsoft Azure Active Directory (Azure AD) allows its customers to define granular access control for every user or user role based on different types of permissions (read, modify, delete, copy, sharing, etc.). There are also over 70 user roles provided by Azure, such as a Global Administrator, Virtual Machine Contributor or Billing Administrator. Azure Arc lets businesses extend role-based access control (RBAC) managed by Azure to on-premises environments. This means that any groups, policies, settings, security principals and managed identities that were deployed by Azure AD can now access all managed cloud resources. Azure AD also provides auditing so it is easy to track any changes made by users or security principals across the hybrid cloud.
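The RBAC model described above comes down to checking a user's assigned role for a permitted action. The following is a minimal Python sketch of that idea (with hypothetical role and assignment data, not the Azure AD API):

```python
# Hypothetical illustration of role-based access control: roles map to sets
# of allowed actions, users map to roles, and every access check consults
# the assignment rather than per-user permissions.

ROLES = {
    "Reader": {"read"},
    "Virtual Machine Contributor": {"read", "modify", "delete"},
}
ASSIGNMENTS = {"alice": "Reader", "bob": "Virtual Machine Contributor"}

def is_authorized(user: str, action: str) -> bool:
    """A user may perform an action only if their role grants it."""
    return action in ROLES.get(ASSIGNMENTS.get(user, ""), set())

print(is_authorized("alice", "delete"))  # False
print(is_authorized("bob", "delete"))    # True
```

The indirection through roles is what makes the model auditable: changing one role definition updates the effective permissions of every user assigned to it.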

For more information about Role-Based Access Control visit https://docs.microsoft.com/en-us/azure/role-based-access-control/overview.

DevOps and Application Management with Azure Arc

Over the past few years, containers have become more commonplace as they provide certain advantages over VMs, allowing the virtualized applications and services to be abstracted from their underlying virtualized infrastructure. This means that containerized applications can be uniformly deployed anywhere with any tools so that users do not have to worry about the hardware configuration. This technology has become popular amongst application developers, enabling them to manage their entire application development lifecycle without having a dependency on the IT department to set up the physical or virtual infrastructure. This development methodology is often called DevOps. One of the key design requirements with Azure Arc was to make it hardware agnostic, so with Azure Arc, developers can manage their containerized applications the same way whether they are running in Azure, on-premises or in a hybrid configuration.

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Kubernetes is a management tool that allows developers to deploy, manage and update their containers. Azure Kubernetes Service (AKS) is Microsoft’s Kubernetes service, and it can be integrated with Azure Arc. This means that AKS can be used to manage on-premises servers running containers. In addition to Azure Kubernetes Service, Azure Arc can be integrated with other Kubernetes management platforms, including Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service.

For more information about Azure Container Services visit https://azure.microsoft.com/en-us/product-categories/containers and for Azure Kubernetes Services (AKS) visit https://azure.microsoft.com/en-us/services/kubernetes-service.

Other DevOps Tools with Azure Arc

For container management on Azure Arc developers can use any of the common Kubernetes management platforms, including Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service. All standard deployment and management operations are supported on Azure Arc hardware enabling cross-cloud management.

More information about the non-Azure management tools is provided in the section on Third-Party Management Tools.

DevOps On-Premises with Azure Arc

Many developers prefer to work on their own hardware and some are required to develop applications in a private environment to keep their data secure. Azure Arc allows developers to build and deploy their applications anywhere utilizing Azure’s cloud-based AI, security and other cloud features while retaining their data, IP or other valuable assets within their own private cloud. Additionally, Azure Active Directory can use role-based access control (RBAC) and Azure Policies to manage developer access to sensitive company resources.

Elastic Scalability & Rapid Deployment with Azure Arc

Containerized applications are designed to start quickly when running on a highly available Kubernetes cluster. Because containers share the host operating system rather than booting a full guest OS, they can be rapidly deployed and scaled. These applications can quickly grow to near-unlimited capacity when deployed on Microsoft Azure, and when using Azure Arc, they can be managed across public and private clouds. Applications will usually contain several container types that can be deployed in different locations based on their requirements. A common deployment configuration for a two-tiered application is to place the web frontend on Microsoft Azure for scalability and the database in a secure private cloud backend.

Hybrid Cloud Integration with Azure Arc

Microsoft’s hybrid cloud initiatives over the past few years have included certifying on-premises software and hardware configurations known as Azure Stack. Azure Stack allows organizations to run Azure-like services on their own hardware in their own datacenter, letting organizations that may be restricted from using public cloud services utilize the best parts of Azure. Azure Stack is most commonly deployed by organizations that must keep their customers’ data in house (or within their territory) for data sovereignty, making it popular with customers who cannot adopt the Microsoft Azure public cloud. Azure Arc integrates with Azure Stack Hub, Azure Stack Edge and the Azure Stack HCI configurations, allowing these services to be managed from Azure.

For more information about Azure Stack visit https://azure.microsoft.com/en-us/overview/azure-stack.

Azure Stack Hub with Azure Arc

Azure Stack Hub (formerly Microsoft Azure Stack) offers organizations a way to run Azure services from their own datacenter, from a service provider’s site, or from within an isolated environment. This cloud platform allows users to deploy Windows VMs, Linux VMs and Kubernetes containers on hardware which they operate. The offering is popular with developers who want to run services locally, organizations which need to retain their customers’ data onsite, and groups which are regularly disconnected from the Internet, as is common with sea vessels or emergency response personnel. Azure Arc allows Azure Stack Hub nodes to run supported Azure services (like Azure Data Services) while being centrally managed and optimized via Azure. These applications can be distributed across public, private or hybrid clouds.

For more information about Azure Stack Hub visit https://docs.microsoft.com/en-us/azure-stack/user/?view=azs-1908.

Azure Stack Edge with Azure Arc

Azure Stack Edge (previously Azure Data Box Edge) is a virtual appliance which can run on any hardware in a datacenter, branch office, remote site or disconnected environment. It is designed to run edge computing workloads on Hyper-V VMs, VMware VMs, containers and Azure services. These edge servers are optimized to run IoT, AI and business workloads so that processing can happen onsite, rather than data being sent across a network to a cloud datacenter for processing. When the Azure Stack Edge appliance is (re)connected to the network, it transfers any data at high speed, and data transfers can be scheduled to run during off-hours. It supports machine learning acceleration through FPGA or GPU hardware. Azure Arc can centrally manage Azure Stack Edge, its virtual appliances and physical hardware.

For more information about Azure Stack Edge visit https://azure.microsoft.com/en-us/services/databox/edge.

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) is a program that provides preconfigured hyperconverged hardware from validated OEM partners, optimized to run Azure Stack. Businesses that want to run Azure-like services on-premises can purchase or rent hardware that has been standardized to Microsoft’s requirements. VMs, containers, Azure services, AI, IoT and more can run consistently on the Microsoft Azure public cloud or on Azure Stack HCI hardware in a datacenter. Cloud services can be distributed across multiple datacenters or clouds and centrally managed using Azure Arc.

For more information about Azure Stack HCI visit https://azure.microsoft.com/en-us/overview/azure-stack/hci.

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Azure Lighthouse is a technology designed for managed service providers (MSPs), ISVs or distributed organizations which need to centrally manage their tenants’ resources. Azure Lighthouse allows service providers and tenants to create a two-way trust to allow unified management of cloud resources. Tenants will grant specific permissions for approved user roles on particular cloud resources, so that they can offload the management to their service provider. Now service providers can add their tenants’ private cloud environments under Azure Arc management, so that they can take advantage of the new capabilities which Azure Arc provides.

For more information about Azure Lighthouse visit https://azure.microsoft.com/en-us/services/azure-lighthouse or on the Altaro MSP Dojo.

Third-Party Integration with Azure Arc

Azure Resource Manager (ARM) sits within the Azure management layer (control plane). ARM provides a way to easily create, manage, monitor and delete any Azure resource. Every native and third-party Azure resource uses ARM to ensure that it can be centrally managed through Azure management tools. Azure Arc now allows non-Azure resources to be managed by Azure as well. These can include third-party clouds (Amazon Web Services, Google Cloud Platform), Windows and Linux VMs, VMs on non-Microsoft hypervisors (VMware vSphere, Google Compute Engine, Amazon EC2), and Kubernetes containers and clusters (including IBM Kubernetes Service, Google Kubernetes Engine and Amazon EKS). At the time of this writing, limited information is available about third-party integration, but more will be added over time.

Amazon Web Services (AWS) Integration with Azure Arc

Amazon Web Services (AWS) is Amazon’s public cloud platform. Some services from AWS can be managed by Azure Arc. This includes operating virtual machines running on the Amazon Elastic Compute Cloud (EC2) and containers running on Amazon Elastic Kubernetes Service (EKS). Azure Arc also lets an AWS site be used as a geo-replicated disaster recovery location. AWS billing can also be integrated with Azure Cost Management & Billing so that expenses from both cloud providers can be viewed in a single location.

Additional information will be added once it is made available.

Google Cloud Platform (GCP) Integration with Azure Arc

Google Cloud Platform (GCP) is Google’s public cloud platform. Some services from GCP can be managed by Azure Arc. This includes operating virtual machines running on Google Compute Engine (GCE) and containers running on Google Kubernetes Engine (GKE).

Additional information will be added once it is made available.

IBM Kubernetes Service Integration with Azure Arc

IBM Cloud is IBM’s public cloud platform. Some services from IBM Cloud can be managed by Azure Arc. This includes operating containers running on IBM Kubernetes Service (IKS).

Additional information will be added once it is made available.

Linux VM Integration with Azure Arc

In 2014 Microsoft’s CEO Satya Nadella declared, “Microsoft loves Linux”. Since then the company has embraced Linux integration, making Linux a first-class citizen in its ecosystem. Microsoft even contributes code to the Linux kernel so that it operates efficiently when running as a VM or container on Microsoft’s operating systems. Virtually all management features for Windows VMs are available to supported Linux distributions, and this extends to Azure Arc. Azure Arc admins can use Azure to centrally create, manage and optimize Linux VMs running on-premises, just like any standard Windows VM.

VMware Cloud Solution Integration with Azure Arc

VMware offers a popular virtualization platform and management studio (vSphere) which run on VMware’s hypervisor. Microsoft has acknowledged that many customers running legacy on-premises hardware are using VMware, so it provides numerous integration points with Azure and Azure Arc. Organizations can even virtualize and deploy their entire VMware infrastructure on Azure, rather than in their own datacenter. Microsoft makes it easy to deploy, manage, monitor and migrate VMware systems, and with Azure Arc businesses can now centrally operate their on-premises VMware infrastructure too. While the full management functionality of VMware vSphere is not available through Azure Arc, most standard operations are supported.

For more information about VMware Management with Azure visit https://azure.microsoft.com/en-us/overview/azure-vmware.


Go to Original Article
Author: Symon Perriman

New AWS cost management tool, instance tactics to cut cloud bills

LAS VEGAS — Amazon continuously rolls out new discounting programs and AWS cost management tools in an appeal to customers’ bottom lines and as a hedge against mounting competition from Microsoft and Google.

Companies have grappled with nasty surprises on their AWS bills for years, with the reasons attributed to AWS’ sheer complexity, as well as the runaway effect on-demand computing can engender without strong governance. It’s a thorny problem with a solution that can come in multiple forms.

To that end, the cloud giant released a number of new AWS cost management tools at re:Invent, including Compute Optimizer, which uses machine learning to help customers right-size their EC2 instances.

At the massive re:Invent conference here this week, AWS customers discussed how they use both AWS-native tools and their own methods to get the most value from their cloud budgets.

Ride-sharing service Lyft has committed to spend at least $300 million on AWS cloud services between the beginning of this year and the end of 2021.

Lyft, like rival Uber, saw a hockey stick-like growth spurt in recent years, going from about 50 million rides in 2015 to more than 350 million a few years later. But its AWS cost management needed serious work, said Patrick Valenzuela, engineering manager.

An initial effort to wrangle AWS costs resulted in a spreadsheet, powered by a Python script, that divided AWS spending by the number of rides given to reach an average figure. The spreadsheet also helped Lyft rank engineering teams according to their rate of AWS spending, which had a gamification effect as teams competed to do better, Valenzuela said in a presentation.
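That first-generation report amounts to a few lines of arithmetic. The sketch below is a hypothetical reconstruction of the approach, not Lyft’s actual script; the team names and dollar figures are invented for illustration.

```python
# Hypothetical sketch of a cost-per-ride report: divide each team's AWS
# spend by total rides over the same period, then rank the teams.

def cost_per_ride(team_spend_usd: dict[str, float],
                  total_rides: int) -> list[tuple[str, float]]:
    """Return (team, cost-per-ride) pairs, cheapest first."""
    ranked = [(team, spend / total_rides)
              for team, spend in team_spend_usd.items()]
    ranked.sort(key=lambda pair: pair[1])
    return ranked

if __name__ == "__main__":
    spend = {"maps": 120_000.0, "payments": 90_000.0, "dispatch": 210_000.0}
    for team, cpr in cost_per_ride(spend, total_rides=5_000_000):
        print(f"{team}: ${cpr:.4f} per ride")
```

Publishing a ranking like this is what produced the gamification effect Valenzuela described: teams could watch their position change as they trimmed spend.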

Within six months, Lyft managed to drop the AWS cost-per-ride figure by 40%. But it needed more, such as fine-grained data sets that could be probed via SQL queries. Other factors, such as discounts and the cost of AWS Reserved Instances, weren’t always reflected transparently in the AWS-provided cost usage reports used to build the spreadsheet.

Lyft subsequently built a second-generation tool that included a data pipeline fed into a data warehouse. It created a reporting and dashboard layer on top of that foundation. The results have been promising. Earlier this year, Lyft found it was now spending 50% less on read/writes for its top 25 DynamoDB tables and also saved 50% on spend related to Kubernetes container migrations.

 “If you want to learn more about AWS, I recommend digging into your bill,” Valenzuela said.

AWS cost management a perennial issue

While there are plenty of cloud cost management tools available in addition to the new AWS Compute Optimizer, some AWS customers take a proactive approach to cost savings, compared to using historical analysis to spot and shed waste, as Lyft did in the example presented at re:Invent.

Privately held mapping data provider Here Technologies serves 100 million motor vehicles and collects 28 TB of data each day. Companies have a choice in the cloud procurement process — one being to force teams through rigid sourcing activities, said Jason Fuller, head of cloud management and operations at Here.

“Or, you let the builders build,” he said during a re:Invent presentation. “We let the builders build.”

Still, Here had developed a complex landscape on AWS, with more than 500 accounts that collectively spun up more than 10 million EC2 instances a year. A few years ago, Here began a concerted effort to adopt AWS Reserved Instances in a programmatic manner, hoping to squeeze out waste.

Reserved Instances carry contract terms of up to three years and offer substantial savings over on-demand pricing. Here eventually moved nearly 80% of its EC2 usage into Reserved Instances, which gave it about 50% off the on-demand rate, Fuller said.
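The blended effect of that coverage is straightforward to estimate: roughly 80% of usage at about 50% off, with the remaining 20% at the full on-demand rate, works out to about 40% off the total bill. The sketch below simply encodes that arithmetic (the function name is mine; the figures are from the article).

```python
# Estimate overall savings from partial Reserved Instance coverage.

def blended_savings(ri_coverage: float, ri_discount: float) -> float:
    """Fraction saved versus paying on-demand rates for everything."""
    effective_cost = ri_coverage * (1 - ri_discount) + (1 - ri_coverage)
    return 1 - effective_cost

if __name__ == "__main__":
    # ~80% of EC2 usage on RIs at ~50% off, per Here's figures.
    print(f"Overall savings: {blended_savings(0.8, 0.5):.0%}")
```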

The results have been impressive. During the past three-and-a-half years, Here saved $50 million and avoided another $150 million in costs, Fuller said.

Salesforce is another heavy user of AWS. It signed a $400 million infrastructure deal with AWS in 2016 and the companies have since partnered on other areas. Based on its 2017 acquisition of Krux, Salesforce now offers Audience Studio, a data management platform that collects and analyzes vast amounts of audience information from various third-party sources. It’s aimed at marketers who want to run more effective digital advertising campaigns.

Audience Studio handles 200,000 user queries per second, supported by 2,500 Elastic MapReduce Clusters on AWS, said Alex Estrovitz, director of software engineering at Salesforce.

“That’s a lot of compute, and I don’t think we’d be doing it cost-effectively without using [AWS Spot Instances],” Estrovitz said in a re:Invent session. More than 85% of Audience Studio’s infrastructure uses Spot Instances, which are made up of idle compute resources on AWS and cost up to 90% less than on-demand pricing.

But Spot Instances are best suited for jobs like Audience Studio’s, where large amounts of data get parallel-processed in batches across large pools of instances. Spot Instances are ephemeral; AWS can shut them down on brief notice when the system needs resources for other customer jobs. However, customers like Salesforce can buy Spot Instances based on their application’s degree of tolerance for interruptions.

Salesforce has achieved 48% savings overall since migrating Audience Studio to Spot Instances, Estrovitz said. “If you multiply this over 2,500 jobs every day, we’ve saved an immense amount of money.”
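Those two figures imply an average Spot discount of roughly 56%: if about 85% of the fleet runs on Spot and the overall bill is 48% lower than pure on-demand, the Spot portion must be discounted by 0.48 / 0.85, comfortably under the 90% ceiling. A quick sanity check (the calculation is mine, derived from the article’s numbers):

```python
# Back out the average Spot discount implied by the quoted figures,
# assuming the non-Spot 15% of the fleet pays full on-demand rates.

def implied_spot_discount(spot_fraction: float, overall_savings: float) -> float:
    # overall_savings = spot_fraction * spot_discount
    return overall_savings / spot_fraction

if __name__ == "__main__":
    print(f"Implied Spot discount: {implied_spot_discount(0.85, 0.48):.1%}")
```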


Cisco cries foul over security flaw in Zoom Connector

Cisco slammed rival Zoom for a security lapse that left the management portals of many video devices exposed to the public internet. It’s an unusually public spat between two of the industry’s leading video conferencing providers.

The dispute revolves around Zoom Connector, a gateway that connects standards-based video devices to the Zoom cloud. In addition to providing a management portal for the hardware, the service makes it possible to join Zoom meetings with one click.

The Zoom Connector previously allowed anyone with the correct URL to access the admin portal for Cisco, Poly and Lifesize devices from the public internet without login credentials, according to Cisco. That would have let a hacker commandeer a company’s video systems, potentially allowing them to eavesdrop on conference rooms.

Zoom released a patch last week that password-protected access to the control hub via those URLs. But in a blog post this week, Cisco said the quick fix did not go far enough, alerting customers that Zoom’s connector service did not meet Cisco’s security standards.

To create the connector, Zoom built a link between the Zoom cloud and a Cisco web server running within a corporate network, said Sri Srinivasan, general manager of Cisco’s team collaboration group. The configuration provides a point of access to the endpoints that lies outside the network firewall. 

“You don’t want to have firewall settings open for a management interface of this sort, even [when] password-protected,” Srinivasan said.

Similarly, in a statement Tuesday, Lifesize said it considered Zoom Connector an unauthorized integration “built in an inherently insecure way.” However, the company concluded that the security flaw spotlighted by Cisco did not put customers at immediate risk.

In a statement Tuesday, Zoom said it considered the issue fully resolved. While insisting customers were safe, Zoom said it did advise companies to check device logs for unusual activity or unauthorized access.

Zoom added that it was not aware of any instances of hackers exploiting the vulnerability. The URLs necessary to access a device’s management portal are long and complicated, similar to a link to a Google Doc or an unlisted YouTube video. Most likely, a hacker would have needed to first gain access to an admin’s browser history to exploit the flaw.
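To put “long and complicated” in perspective, the guess space of a uniformly random token grows exponentially with its length. The sketch below assumes a 32-character alphanumeric token purely for illustration; the actual format of Zoom Connector URLs was not disclosed.

```python
import math

def guess_space_bits(token_length: int, alphabet_size: int) -> float:
    """Entropy in bits of a uniformly random token."""
    return token_length * math.log2(alphabet_size)

if __name__ == "__main__":
    # 32 characters drawn from [A-Za-z0-9]: roughly 190 bits of entropy,
    # far beyond what brute-force URL guessing could cover.
    print(f"{guess_space_bits(32, 62):.0f} bits")
```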

Zoom has come under fire before for security shortfalls. Experts criticized the vendor in July for quietly installing a web server on Mac computers. The software left users vulnerable to being forcibly joined to a meeting with their video cameras turned on.

Cisco has raised issues with Zoom about the connector in the past, but only became aware of the URL vulnerability on Oct. 31, Srinivasan said. A customer who wished to remain anonymous reported the problem to Cisco and Zoom around the same time, he said. Zoom patched the issue on Nov. 19, one day after Cisco said it contacted the company about the problem. 

Adding fuel to the fire, Zoom has been using the Cisco logo on its connector’s admin portal. Cisco said this likely led customers to believe they were accessing a website supported by Cisco.

“This has been going on for a long, long time,” Srinivasan said. “Now, we know better to make sure we check everything Zoom does.”

But it seems unlikely Zoom will heed Cisco’s directive to obtain certification of the service. The vendor has a financial stake in the matter, as it charges customers $499 per year, per port for Zoom Connector.

Zoom has emerged in recent years as perhaps Cisco’s biggest competitor in the video conferencing market. Eric Yuan resigned as Cisco’s vice president of engineering to start Zoom in 2011. Yuan was one of the chief architects of the Webex video conferencing software that Cisco acquired in 2007.

In the coming months, Cisco is planning to release a SIP-based integration for Zoom and other leading video conferencing providers. The technology would let users join third-party meetings with one click from a Cisco device.

Cisco already supports SIP-based interoperability. But taking advantage of it requires businesses to build an integration themselves or pay for a third-party service. Srinivasan said the forthcoming SIP integration would eliminate the need for a service like Zoom Connector.


Kronos introduces its latest time clock, Kronos InTouch DX

Workforce management and HR software vendor Kronos this week introduced Kronos InTouch DX, a time clock offering features including individualized welcome screens, multilanguage support, biometric authentication and integration with Workforce Dimensions.

The new time clock is aimed at providing ease of use and more personalization for employees.

“By adding consumer-grade personalization with enterprise-level intelligence, Kronos InTouch DX surfaces the most important updates first, like whether a time-off request has been approved or a missed punch needs to be resolved,” said Bill Bartow, vice president of global product management at Kronos.

InTouch DX works with Workforce Dimensions, Kronos’ workforce management suite. When a manager updates the schedule, employees can see those updates instantly on the Kronos InTouch DX, and when employees request time off through the Kronos InTouch DX, managers are notified in Workforce Dimensions, according to the company.

Workforce Dimensions is mobile-native and accessible on smartphones and tablets.

Other features of InTouch DX include:

  • Smart Landing: Provides a personal welcome screen alerting users to unread messages, time-off approvals or requests, shift swaps and schedule updates.
  • Individual Mode: Provides one-click access to a user’s most frequent self-service tasks such as viewing their schedule, checking their accruals bank or transferring job codes.
  • My Time: Combines an individual’s timecard and weekly schedule, providing an overall view so that employees can compare their punches to scheduled hours to avoid errors.
  • Multilanguage support: Available for Dutch, English, French (both Canadian and European), German, Japanese, Spanish, Traditional and Simplified Chinese, Danish, Hindi, Italian, Korean, Polish and Portuguese.
  • Optional biometric authentication: Available as an option for an extra layer of security or in place of a PIN or a badge. The InTouch DX supports major employee ID badge formats, as well as PIN/employee ID numbers.
  • Date and time display: Features an always-on date and time display on screen.
  • Capacitive touchscreen: Utilizes capacitive technology used in consumer electronic devices to provide precision and reliability.

“Time clocks are being jolted to the front of workers’ visibility with new platform capabilities that surpass the traditional time clock hidden somewhere in a corner. Biometrics, especially facial recognition, are key to accelerate and validate time punches,” said Holger Mueller, vice president and principal analyst at Constellation Research.

When it comes to purchasing a product like this, Mueller said organizations should look into a software platform. “[Enterprises] need to get their information and processes on it, it needs to be resilient, sturdy, work without power, work without connectivity and gracefully reconnect when possible,” he said.

Other vendors in the human capital management space include Workday, Paycor and WorkForce Software. Workday’s time-tracking and attendance feature works on mobile devices and provides real-time analytics to aid managers’ decisions. Paycor’s Time and Attendance tool offers a mobile punching feature that can verify punch locations and enable administrators to set location maps to ensure employees punch in at or near the correct work locations. WorkForce’s Time and Attendance tool automates pay rules for hourly, salaried or contingent workforces.


Dell EMC upgrades VxRail appliances for AI, SAP HANA

Dell EMC today added predictive analytics and network management to its VxRail hyper-converged infrastructure family while expanding NVMe support for SAP HANA and AI workloads.

Dell EMC VxRail appliances combine Dell PowerEdge servers and Dell-owned VMware’s vSAN hyperconverged infrastructure (HCI) software. The launch of Dell’s flagship HCI platform includes two new all-NVMe appliance configurations, plus VxRail Analytic Consulting Engine (ACE) and support for SmartFabric Services (SFS) across multi-rack configurations.

The new Dell EMC VxRail appliance models are the P580N and the E560N. The P580N is a four-socket system designed for SAP HANA in-memory database workloads, and it is the first appliance in the VxRail P Series performance line to support NVMe. The 1U E560N is aimed at high-performance computing and compute-heavy workloads such as AI and machine learning, along with virtual desktop infrastructure.

The new 1U E Series systems support Nvidia T4 GPUs for extra processing power. The E Series also supports 8 TB solid-state drives, doubling the total capacity of previous models. The VxRail storage-heavy S570 nodes also now support the 8 TB SSDs.

ACE is generally available following a six-month early access program. Developed on Dell’s Pivotal Cloud Foundry platform, ACE performs monitoring and performance analytics across VxRail clusters. It provides alerts about possible system problems, performs capacity analysis and can help orchestrate upgrades.

The addition of ACE to VxRail comes a week after Dell EMC rival Hewlett Packard Enterprise made its InfoSight predictive analytics available on its SimpliVity HCI platform.

Wikibon senior analyst Stuart Miniman said the analytics, SFS and new VxRail appliances make it easier to manage HCI while expanding its use cases.

“Hyperconverged infrastructure is supposed to be simple,” he said. “When you add in AI and automated operations, that will make it simpler. We’ve been talking about intelligence and automation of storage our whole careers, but there has been a Cambrian explosion in that over the last year. Now they’re building analytics and automation into this platform.”

Bringing network management into HCI

Part of that simplicity includes making it easier to manage networking in HCI. Expanded capabilities for SFS on VxRail include the ability for HCI admins to manage networking switches across VxRail clusters without requiring dedicated networking expertise. SFS now applies across multi-rack VxRail clusters, automating switch configuration for up to six racks in one site. SFS supports from six switches in a two-rack configuration to 14 switches in a six-rack deployment.

Support for Mellanox 100 Gigabit Ethernet PCIe cards helps accelerate streaming media and live broadcast functions.

“We believe that automation across the data center is key to fostering operational freedom,” Gil Shneorson, Dell EMC vice president and general manager for VxRail, wrote in a blog with details of today’s upgrades. “As customers expand VxRail clusters across multiple racks, their networking needs expand as well.”

Dell EMC VxRail vs. Nutanix: All about the hypervisor?

IDC lists Dell as the leader in the hyperconverged appliance market, which IDC said hit $1.8 billion in the second quarter of 2019. Dell had 29.2% of the market, well ahead of second-place Nutanix with 14.2%. Cisco was a distant third with 6.2%.

According to Miniman, the difference between Dell EMC and Nutanix often comes down to the hypervisor deployed by the user. VxRail closely supports market leader VMware, but VxRail appliances do not support other hypervisors. Nutanix supports VMware, Microsoft Hyper-V and the Nutanix AHV hypervisors. The Nutanix software stack competes with vSAN.

“Dell and Nutanix are close on feature parity,” Miniman said. “If you’re using VMware, then VxRail is the leading choice because it’s 100% VMware. VxRail is in lockstep with VMware, while Nutanix is obviously not in lockstep with VMware.”


HPE brings InfoSight AI to SimpliVity HCI

Hewlett Packard Enterprise has made its InfoSight predictive analytics resource management capabilities available on its HPE SimpliVity hyper-converged infrastructure platform.

InfoSight provides capacity utilization reports and forecasts, and sends alerts of possible problems before users run out of capacity. HPE acquired InfoSight when it bought Nimble Storage in March 2017, two months after it acquired early hyper-converged infrastructure (HCI) startup SimpliVity. HPE ported Nimble’s InfoSight to its flagship 3PAR arrays, and it is used in its new Primera storage, as well as on the ProLiant servers that SimpliVity runs on.

HPE is also connecting SimpliVity to its StoreOnce backup appliances, allowing customers to move data from SimpliVity nodes to the StoreOnce deduplication backup boxes.

HPE disclosed plans to bring InfoSight to SimpliVity in June, and it is now generally available as part of the SimpliVity service agreement. StoreOnce integration with SimpliVity HCI is planned for the first half of 2020.

SimpliVity HCI trails Dell, Nutanix, Cisco

HPE has lagged its major server rivals in HCI sales, particularly Dell. IDC listed SimpliVity as fourth in branded HCI revenue in the second quarter with $83 million, only 2.3% of the market. No. 1 Dell ($533 million) and No. 2 Nutanix ($259 million) combined for nearly half of the total market share, with Cisco third at $114 million and a 6.2% share, according to IDC.

HPE’s commitment to SimpliVity has also been questioned by its hedging on HCI products. HPE makes Nutanix technology available as part of its GreenLake as-a-service program, and Nutanix sells its software bundled on HPE ProLiant hardware. HPE customers can also use its servers with VMware vSAN HCI software. And HPE this year launched Nimble Storage dHCI, a disaggregated platform that is not true HCI but competes with HCI products while allowing a greater degree of independent scaling of compute and storage resources. Nimble dHCI is also generally available this week.

HCI ‘comes down to data’

Pittsburgh-based trucking company Pitt Ohio has been a SimpliVity customer since before HPE acquired the HCI pioneer. Systems engineer Justin Brooks said he was familiar with InfoSight as a previous Nimble Storage customer, so he signed up for the beta program on SimpliVity. Brooks said he has used InfoSight since June, and finds it significantly aids him in managing capacity on his 19 HCI nodes used for primary storage and disaster recovery. 

“Most of it comes down to data – how much you’re replicating, and how much data is on there versus what the hypervisor supports,” Brooks said. “The InfoSight intelligence and prediction capabilities are great for SimpliVity, because on any hyper-convergence platform it’s all about scalability. You need to know when to scale out or move things around, so you can plan accordingly. Hyper-converged is not dirt cheap either, especially when it’s all-flash. It’s important to make sure you’re getting your money’s worth out of the resources.”

Brooks said he previously employed “guesstimates and fuzzy math” to predict SimpliVity HCI growth, but InfoSight now does those predictions for him, tracking data growth patterns over the past 30-, 60- and 90-day periods.

“You’re always worried how big your data sets are growing, especially on the SQL Server side,” he said. “You don’t get as high efficiency with dedupe and compression on SQL data as with file data. With SimpliVity you have to dig into the CLI and get deep in there, or see what was sent over from the production side or the DR side. InfoSight shows you that data more granularly.”

Brooks said he was concerned about HPE’s plans for SimpliVity when it made the acquisition in 2017, but he’s happy with its commitment. Pitt Ohio took advantage of an HPE buyback program to convert its SimpliVity OmniCubes that used Dell hardware into ProLiant-based SimpliVity nodes. Pitt Ohio had nine SimpliVity nodes before the HPE acquisition, and is up to 19 now. Brooks estimates 98% of his applications run on SimpliVity HCI. The trucking company is a VMware shop and first got into SimpliVity HCI for virtual desktop infrastructure. It has since switched from Cisco UCS servers and a variety of storage, including Dell EMC VMAX and Data Domain and Nimble arrays.

“We had a Frankenblock infrastructure,” Brooks said. “When we needed to refresh hardware, our options were to forklift everything in the data center or get on the hyper-converged route.”

Pitt Ohio now has one SimpliVity cluster for Microsoft SQL Server, another cluster for all other production workloads and a third for QA.

Brooks said he uses SimpliVity for data protection, but is considering adding a Cohesity backup appliance so he can move file data to cheaper storage; HPE sells Cohesity software on its Apollo servers. “We want to get some files off of SimpliVity because I’d rather not use all that flash disk for files,” Brooks said.


Enterprise IT weighs pros and cons of multi-cloud management

Multi-cloud management among enterprise IT shops is real, but the vision of routine container portability between clouds has yet to be realized for most.

Multi-cloud management is more common as enterprises embrace public clouds and deploy standardized infrastructure automation platforms, such as Kubernetes, within them. Most commonly, IT teams look to multi-cloud deployments for workload resiliency and disaster recovery, or as the most reasonable approach to combining companies with loyalty to different public cloud vendors through acquisition.

“Customers absolutely want and need multi-cloud, but it’s not the old naïve idea about porting stuff to arbitrage a few pennies in spot instance pricing,” said Charles Betz, analyst at Forrester Research. “It’s typically driven more by governance and regulatory compliance concerns, and pragmatic considerations around mergers and acquisitions.”

IT vendors have responded to this trend with a barrage of marketing around tools that can be used to deploy and manage workloads across multiple clouds. Most notably, IBM’s $34 billion bet on Red Hat revolves around multi-cloud management as a core business strategy for the combined companies, and Red Hat’s OpenShift Container Platform version 4.2 updated its Kubernetes cluster installer to support more clouds, including Azure and Google Cloud Platform. VMware and Rancher also use Kubernetes to anchor multi-cloud management strategies, and even cloud providers such as Google offer products such as Anthos with the goal of managing workloads across multiple clouds.

For some IT shops, easier multi-cloud management is a key factor in Kubernetes platform purchasing decisions.

“Every cloud provider has hosted Kubernetes, but we went with Rancher because we want to stay cloud-agnostic,” said David Sanftenberg, DevOps engineer at Cardano Risk Management Ltd, an investment consultancy firm in the U.K. “Cloud outages are rare, but it’s nice to know that on a whim we can spin up a cluster in another cloud.”

Multi-cloud management requires a deliberate approach

With Kubernetes and VMware virtual machines as common infrastructure templates, some companies use multiple cloud providers to meet specific business requirements.

Unified communications-as-a-service provider 8×8, in San Jose, Calif., maintains IT environments spread across 15 self-managed data centers, plus AWS, Google Cloud Platform, Tencent and Alibaba clouds. Since the company’s business is based on connecting clients through voice and video chat globally, placing workloads as close to customers’ locations as possible is imperative, and this makes managing multiple cloud service providers worthwhile. The company’s IT ops team keeps an eye on all its workloads with VMware’s Wavefront cloud monitoring tool.

Dejan Deklich, chief product officer, 8×8

“It’s all the same [infrastructure] templates, and all the monitoring and dashboards stay exactly the same, and it doesn’t really matter where [resources] are deployed,” said Dejan Deklich, chief product officer at 8×8. “Engineers don’t have to care where workloads are.”

Multiple times a year, Deklich estimated, the company uses container portability to move workloads between clouds when it gets a good deal on infrastructure costs, although it doesn’t move them in real time or spread apps among multiple clouds. Multi-cloud migration also only applies to a select number of 8×8’s workloads, Deklich said.


“If you’re in [AWS] and using RDS, you’re not going to be able to move to Oracle Cloud, or you’re going to suffer connectivity issues; you can make it work, but why would you?” he said. “There are workloads that can elegantly be moved, such as real-time voice or video distribution around the world, or analytics, as long as you have data associated with your processing, but moving large databases around is not a good idea.”

Maintaining multi-cloud portability also requires a deliberate approach to integration with each cloud provider.

“We made a conscious decision that we want to be able to move from cloud to cloud,” Deklich said. “It depends on how deep you go into integration with a given cloud provider — moving a container from one to the other is no problem if the application inside is not dependent on a cloud-specific infrastructure.”
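Deklich’s portability condition can be made concrete with a deployment manifest. The sketch below is a hypothetical Kubernetes Deployment (the workload name and image registry are invented) whose only dependencies are generic Kubernetes primitives; portability breaks as soon as a spec references a provider-specific storage class, load balancer annotation or IAM role.

```yaml
# Hypothetical cloud-agnostic Deployment: nothing here names a specific
# cloud provider, so the same manifest applies to a cluster in any cloud.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voice-router            # invented workload name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: voice-router
  template:
    metadata:
      labels:
        app: voice-router
    spec:
      containers:
      - name: voice-router
        # Image pulled from a self-hosted registry, not a cloud-specific one
        image: registry.example.com/voice-router:1.4
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
```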

The ‘lowest common denominator’ downside of multi-cloud

Not every organization buys into the idea that multi-cloud management’s promise of freedom from cloud lock-in is worthwhile, and the use of container portability to move apps from cloud to cloud remains rare, according to analysts.

“Generally speaking, companies care about portability from on-premises environments to public cloud, not wanting to get locked into their data center choices,” said Lauren Nelson, analyst at Forrester Research. “They are far less cautious when it comes to getting locked into public cloud services, especially if that lock in comes with great value.”


In fact, some IT pros argue that lock-in is preferable to missing out on the value of cloud-specific secondary services, such as AWS Lambda.

“I am staunchly single cloud,” said Robert Alcorn, chief architect of platform and product operations at Education Advisory Board (EAB), a higher education research firm headquartered in Washington, D.C. “If you look at how AWS has accelerated its development over the last year or so, it makes multi-cloud almost a nonsensical question.”

For Alcorn, the value of integrating EAB’s GitLab pipelines with AWS Lambda outweighs the risk of lock-in to the AWS cloud. Connecting AWS Lambda and API Gateway to Amazon’s SageMaker machine learning service has also cut costs almost a thousandfold compared with the company’s previous container-based hosting platform, he said.
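As a rough illustration of the kind of Lambda-to-SageMaker wiring described above, a handler behind API Gateway might look like the sketch below. The endpoint name and payload shape are invented for illustration; `invoke_endpoint` is the standard boto3 call for querying a SageMaker inference endpoint.

```python
# Hypothetical Lambda handler bridging API Gateway to a SageMaker endpoint.
# The endpoint name and feature format are invented for illustration.
import json

def to_csv_row(features):
    """Serialize feature values into the CSV line many SageMaker
    built-in algorithm endpoints expect."""
    return ",".join(str(f) for f in features)

def handler(event, context):
    features = json.loads(event["body"])["features"]
    import boto3  # imported lazily so the module loads without the AWS SDK
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="demo-model-prod",   # invented endpoint name
        ContentType="text/csv",
        Body=to_csv_row(features),
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```

The cost advantage Alcorn cites comes from this pattern paying only per invocation, with no containers or cluster nodes idling between requests.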

Even without the company’s interest in Lambda integration, the work required to keep applications fully cloud-neutral isn’t worth it for his company, Alcorn said.

“There’s a ceiling to what you can do in a truly agnostic way,” he said. “Hosted cloud services like ECS and EKS are also an order of magnitude simpler to manage. I don’t want to pay the overhead tax to be cloud-neutral.”

Some IT analysts also sound a note of caution about the value of multi-cloud management for disaster recovery or price negotiations with cloud vendors, depending on the organization. For example, some financial regulators require multi-cloud deployments for risk mitigation, but the worst-case scenario of a complete cloud failure or the closure of a cloud provider’s entire business is highly unlikely, Forrester’s Nelson wrote in a March 2019 research report, “Assess the Pain-Gain Tradeoff of Multicloud Strategies.”

Splitting cloud deployments between multiple providers also may not give enterprises as much of a leg up in price negotiations as they expect, unless the customer is a very large organization, Nelson wrote in the report.

The risks of multi-cloud management are also manifold, according to Nelson’s report, from high costs for data ingress and egress between clouds to network latency and bandwidth issues, broader skills requirements for IT teams, and potentially double the resource costs to keep a second cloud deployment on standby for disaster recovery.
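The data-egress risk in Nelson’s list is easy to quantify in rough terms. The sketch below uses a placeholder per-GB rate; real egress pricing varies by provider, region and volume tier, so treat the output as illustrative only.

```python
# Back-of-the-envelope estimate of the recurring transfer cost added by
# replicating data to a standby cloud. The $0.09/GB rate is a placeholder,
# not any provider's actual price.

def monthly_egress_cost(replicated_gb_per_day, rate_per_gb=0.09, days=30):
    """Cost of shipping replication traffic out of the primary cloud."""
    return replicated_gb_per_day * days * rate_per_gb

# 500 GB/day of replication traffic at the assumed rate
print(monthly_egress_cost(500))  # about $1,350 per month
```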

Of course, value is in the eye of the beholder, and each organization’s multi-cloud mileage may vary.

“I’d rather spend more for the company to be up and running, and not lose my job,” Cardano’s Sanftenberg said.


ArubaOS-CX upgrade unifies campus, data center networks

Aruba’s latest switching hardware and software unifies network management and analytics across the data center and campus. The approach to modern networking is similar to the one that underpins rival Cisco’s initial success with enterprises upgrading campus infrastructure.

Aruba, a Hewlett Packard Enterprise company, this week launched its most significant upgrade to the two-year-old ArubaOS-CX (AOS-CX) network operating system. With the NOS improvements, Aruba unveiled two series of switches, the stackable CX 6300 and the modular CX 6400. Together, the hardware covers access, aggregation and core uses.

The latest releases arrive a year after HPE transferred management of its data center networking group to Aruba. The latter company is also responsible for HPE’s FlexNetwork line of switches and software.

The new CX hardware is key to taking AOS-CX to the campus, where companies can take advantage of the software’s advanced features. As modular hardware, the 6400 can act as an aggregation or core switch, while the 6300 drives the access layer of the network where traffic comes from wired or wireless mobile or IoT devices.

For the data center, Aruba has the 8400 switch series, which also runs AOS-CX. The hardware marked Aruba’s entry into the data center market, where it still has to build credibility.

“Many non-Aruba customers and some Aruba campus customers are likely to take a wait-and-see posture,” said Brad Casemore, an analyst at IDC. 

ArubaOS-CX everywhere  

Nevertheless, having one NOS powering all the switches does make it possible to manage them with the Aruba software that runs on top of AOS-CX. Available software includes products for network management, analytics and access control. 

For the wired and wireless LAN, Aruba has ClearPass, which lets organizations set access policies for groups of IoT and mobile devices; and Central, a cloud-based management console. For the data center, Aruba has HPE SimpliVity, which provides automated switch configurations during deployment of Aruba and HPE switches.

Aruba’s new line of CX 6300 and 6400 switches

New features in the latest version of ArubaOS-CX include Dynamic Segmentation, which lets enterprises assign policies to wired client devices based on port or user role. Other enhancements include support for an Ethernet VPN over VXLAN for data center connectivity.

Also, within the new 10.4 version of AOS-CX, Aruba integrated the Network Analytics Engine (NAE) with Aruba’s NetEdit software for orchestration of multiple switch configurations. NAE is a framework built into AOS-CX that lets enterprises monitor, troubleshoot and collect network data through the use of scripting agents.

Aruba vs. Cisco

How well Aruba’s unification strategy for networking can compete with Cisco’s remains to be seen. The latter company has had significant success with the Catalyst 9000 campus switching line introduced in 2017 with Cisco’s DNA Center management console. Some organizations use the DNA product in data center networking.

In the first quarter of 2019, Cisco’s success with the Catalyst 9000 boosted its revenue share of the campus switching market by 5 percentage points, according to the research firm Dell’Oro Group. During the same quarter, the combined revenue of the other vendors, which included HPE, declined.

In September, Gartner listed Cisco and Aruba as the leaders in the research firm’s Magic Quadrant for Wired and Wireless LAN Access Infrastructure.

Competition is fierce in the campus infrastructure market because enterprises are just starting to upgrade networks. Driving the current upgrade cycle is the switch to Wi-Fi 6 — the next-generation wireless standard that can support more devices than the present technology.

Wi-Fi 6 lets enterprises add IoT devices to their networks, ranging from IP telephones and surveillance cameras to medical devices and handheld computers. The latter are used in warehouses and on the factory floor.

That transition will drive companies to deploy aggregation and access switches with faster port speeds and PoE ports to power wired IoT gear.

Enterprises skeptical of cross-domain networking

Aruba, Cisco and other networking vendors pushing unified campus and data center networking haven’t convinced many enterprises to head in that direction, IDC analyst Brandon Butler said. Adopting that cross-domain technology would require significant changes in current operations, which typically have separate IT teams responsible for the campus and the data center.

IDC has not spoken to many enterprises that have centralized management across domains, Butler said. “This idea that you’re going to have a single pane of glass across the data center and the campus and out to the edge, I just don’t know if the industry is quite there yet.”

Meanwhile, Aruba’s focus on its CX portfolio has left some industry observers wondering whether it would diminish the development of FlexNetwork switches and software. 

However, Michael Dickman, VP of Aruba product line management, said the company plans to fully support its FlexNetwork architecture “in parallel” with the CX portfolio.


What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes, which can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, the Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At the time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex line. Azure Stack HCI node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, with up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.


The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handles private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available, including PowerShell, in both the Windows and open source PowerShell Core versions, as well as third-party products such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.
