Tag Archives: hybrid

For Sale – Dell Latitude 2-in-1 (m7, 8GB, 512GB, 4K touchscreen)

Dell Latitude 7275 hybrid laptop. Top spec model with expensive accessories (stylus, backlit keyboard/case).
Top-end 4K 12.5-inch multitouch screen. Intel Core m7-6Y75 CPU, 8GB of memory and a 512GB Samsung NVMe SSD. Still under warranty (March 2020).

Location
Bristol
Price and currency
380
Delivery cost included
Delivery is NOT included
Prefer goods collected?
I have no preference
Advertised elsewhere?
Not advertised elsewhere
Payment method
BT

Go to Original Article
Author:

New Oracle Enterprise Manager release advances hybrid cloud

In a bid to meet customers’ needs for hybrid cloud deployments, Oracle has injected its Oracle Enterprise Manager system with new capabilities to ease cloud migration and hybrid cloud database management.

The software giant unveiled the new Oracle Enterprise Manager release 13.4 on Wednesday, with general availability expected by the end of the first quarter.

The release includes new analytics features that help users get the most out of a single database and optimize performance. Lifecycle automation for databases gets a boost in the new release. The update also provides new tools to help enterprises migrate from an on-premises database to one in the cloud.

“Managing across hybrid on-prem and public cloud resources can be challenging in terms of planning and executing database migrations,” said Mary Johnston Turner, research vice president for cloud management at IDC. “The new Migration Workbench addresses this need by providing customers with guided support for updating and modernizing across platforms, as appropriate for the customer’s specific requirements.”

Beyond helping with migration, Turner noted that Oracle Enterprise Manager 13.4 supports customer choice by enabling consistent management across Oracle Cloud and traditional on-premises resources, which is a recognition that most enterprises are adopting multi-cloud architectures.

The other key addition in Oracle Enterprise Manager 13.4 is advanced machine learning analytics, Turner noted.

“Prior to this release the analytics capabilities were mostly limited to Oracle Management Cloud SaaS [software as a service] solutions, so adding this capability to Enterprise Manager is significant,” she said.

Oracle Enterprise Manager 13.4 features

Nearly all large Oracle customers use Enterprise Manager already, said Mughees Minhas, vice president of product management at Oracle. He said Oracle doesn’t want to force a new management tool on customers that choose to adopt the cloud, which is why the vendor is increasingly integrating cloud management features with Oracle Enterprise Manager.

Managing across hybrid on-prem and public cloud resources can be challenging in terms of planning and executing database migrations.
Mary Johnston Turner, research vice president for cloud management, IDC

As users decide to move data from on-premises deployments to the cloud, it’s rarely just an exercise in moving an application from one environment to another without stopping to redesign the workflow, Minhas said.

The migration tool in the new Enterprise Manager update includes a SQL performance analyzer feature to ensure that database operations are optimized as they move to the cloud. The tool also includes a compatibility checker to verify that on-premises database applications are compatible with the autonomous versions of Oracle Database that run in the cloud.

Migrating to new databases with Enterprise Manager 13.4

Helping organizations migrate to new database versions is one of the key capabilities of the latest version of Oracle Enterprise Manager.

“Normally, you would create a separate test system on-prem where you would install it and then once you’re done with the testing, then you’d upgrade the actual system,” Minhas said. “So we are promoting these use cases to Enterprise Manager through the use of real application testing tools, where we let you create a new database in the cloud to test.”

Intelligent analytics

The new Oracle Enterprise Manager release also benefits from Exadata Warehouse technology, which now enables analytics for Oracle database workloads.

“The goal of a great admin or cloud DBA [database administrator] is that they want to avoid problems before they happen, and not afterwards,” Minhas said. “So we are building analytical capabilities and some algorithms, so they can do some forecasting, so they know limits and are able to take action.”

Minhas said hybrid management will continue to be Oracle’s focus for Oracle Enterprise Manager.

“Over time, you’ll see us doing more use cases where we also let you do the same thing you’re doing on premises in the cloud, using the same APIs users are already familiar with,” Minhas said.

Go to Original Article
Author:

How to Use Azure Arc for Hybrid Cloud Management and Security

Azure Arc is a new hybrid cloud management option announced by Microsoft in November of 2019. This article serves as a single point of reference for all things Azure Arc.

According to Microsoft’s CEO Satya Nadella, “Azure Arc really marks the beginning of this new era of hybrid computing where there is a control plane built for multi-cloud, multi-edge” (Microsoft Ignite 2019 Keynote at 14:40). That is a strong statement from one of the industry leaders in cloud computing, especially since hybrid cloud computing has already been around for a decade. Essentially, Azure Arc allows organizations to use Azure’s management technologies (“control plane”) to centrally administer public cloud resources along with on-premises servers, virtual machines, and containers. Since Microsoft Azure already manages distributed resources at scale, Microsoft is empowering its users to utilize these same features for all of their hardware, including edge servers. All of Azure’s AI, automation, compliance and security best practices are now available to manage all of their distributed cloud resources and the underlying infrastructure, which is known as “connected machines.” Additionally, several of Azure’s AI and data services can now be deployed on-premises and centrally managed through Azure Arc, enhancing local and offline management and offering greater data sovereignty. This article provides an overview of the Azure Arc technology and its key capabilities (currently in public preview) and will be updated over time.

Contents

Getting Started with Azure Arc

Azure Services with Azure Arc

Azure Artificial Intelligence (AI) with Azure Arc

Azure Automation with Azure Arc

Azure Cost Management & Billing with Azure Arc

Azure Data Services with Azure Arc

Cloud Availability with Azure Arc

Azure Availability & Resiliency with Azure Arc

Azure Backup & Restore with Azure Arc

Azure Site Recovery & Geo-Replication with Azure Arc

Cloud Management with Azure Arc

Management Tools with Azure Arc

Managing Legacy Hardware with Azure Arc

Offline Management with Azure Arc

Always Up-To-Date Tools with Azure Arc

Cloud Security & Compliance with Azure Arc

Azure Key Vault with Azure Arc

Azure Monitor with Azure Arc

Azure Policy with Azure Arc

Azure Security Center with Azure Arc

Azure Advanced Threat Protection with Azure Arc

Azure Update Management with Azure Arc

Role-Based Access Control (RBAC) with Azure Arc

DevOps and Application Management with Azure Arc

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Other DevOps Tools with Azure Arc

DevOps On-Premises with Azure Arc

Elastic Scalability & Rapid Deployment with Azure Arc

Hybrid Cloud Integration with Azure Arc

Azure Stack Hub with Azure Arc

Azure Stack Edge with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Third-Party Integration with Azure Arc

Amazon Web Services (AWS) Integration with Azure Arc

Google Cloud Platform (GCP) Integration with Azure Arc

IBM Kubernetes Service Integration with Azure Arc

Linux VM Integration with Azure Arc

VMware Cloud Solution Integration with Azure Arc

Getting Started with Azure Arc

The Azure Arc public preview was announced in November 2019 at the Microsoft Ignite conference to much fanfare. In its initial release, many fundamental Azure services are supported along with Azure Data Services. Over time, it is expected that a majority of Azure Services will be supported by Azure Arc.

To get started with Azure Arc, check out the guides and documentation provided by Microsoft.

Additional information will be added once it is made available.

Azure Services with Azure Arc

One of the fundamental benefits of Azure Arc is the ability to bring Azure services to a customer’s own datacenter. In its initial release, Azure Arc includes services for AI, automation, availability, billing, data, DevOps, Kubernetes management, security, and compliance. Over time, additional Azure services will be available through Azure Arc.

Azure Artificial Intelligence (AI) with Azure Arc

Azure Arc leverages Microsoft Azure’s artificial intelligence (AI) services to power some of its advanced decision-making abilities learned from managing millions of devices at scale. Since Azure AI is continually monitoring billions of endpoints, it is able to perform tasks that can only be achieved at scale, such as identifying an emerging malware attack. Azure AI improves security, compliance, scalability and more for all cloud resources managed by Azure Arc. The services which run Azure AI are hosted in Microsoft Azure, and in disconnected environments, much of the AI processing can run on local servers using Azure Stack Edge.

For more information about Azure AI visit https://azure.microsoft.com/en-us/overview/ai-platform.

Azure Automation with Azure Arc

Azure Automation is a service provided by Azure that automates repetitive tasks which can be time-consuming or error-prone. This saves the organization significant time and money while helping them maintain operational consistency. Custom automation scripts can be triggered by a schedule or an event to automate servicing, track changes, collect inventory and much more. Since Azure Automation uses PowerShell, Python, and graphical runbooks, it can manage diverse software and hardware that supports PowerShell or has APIs. With Azure Arc, any on-premises connected machines and the applications they host can be integrated and automated with any Azure Automation workflow. These workflows can also be run locally on disconnected machines.
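
As a rough illustration, the hedged sketch below shows how an existing PowerShell runbook might be imported, published and run on a Hybrid Runbook Worker group with the Az.Automation module; the account, resource group, script and worker group names are placeholders rather than anything from official Azure Arc guidance.

```powershell
# Hypothetical sketch using the Az.Automation module. All names are placeholders.
Connect-AzAccount

$common = @{
    ResourceGroupName     = 'rg-hybrid-mgmt'   # placeholder resource group
    AutomationAccountName = 'aa-hybrid'        # placeholder Automation account
}

# Import a local PowerShell script as a runbook
Import-AzAutomationRunbook @common -Name 'Restart-StoppedService' `
    -Path '.\Restart-StoppedService.ps1' -Type PowerShell

# Publish the runbook so it can be scheduled or triggered by events
Publish-AzAutomationRunbook @common -Name 'Restart-StoppedService'

# Run it on a Hybrid Runbook Worker group so it executes against on-premises machines
Start-AzAutomationRunbook @common -Name 'Restart-StoppedService' -RunOn 'OnPremWorkerGroup'
```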

For more information about Azure Automation visit https://azure.microsoft.com/en-in/services/automation.

Azure Cost Management & Billing with Azure Arc

Microsoft Azure and other cloud providers use a consumption-based billing model so that tenants only pay for the resources which they consume. Azure Cost Management and Billing provides granular information to understand how cloud storage, network, memory, CPUs and any Azure services are being used. Organizations can set thresholds and get alerts when any consumer or business unit approaches or exceeds their limits. With Azure Arc, organizations can use cloud billing to optimize and manage costs for their on-premises resources also. In addition to Microsoft Azure and Microsoft hybrid cloud workloads, all Amazon AWS spending can be integrated into the same dashboard.

For more information about Azure Cost Management and Billing visit https://azure.microsoft.com/en-us/services/cost-management.

Azure Data Services with Azure Arc

Azure Data Services is the first major service provided by Azure Arc for on-premises servers. This was the top request of many organizations which want the management capabilities of Microsoft Azure, yet need to keep their data on-premises for data sovereignty. This makes Azure Data Services accessible to companies that must keep their customer’s data onsite, such as those working within regulated industries or those which do not have an Azure datacenter within their country.

In the initial release, both Azure SQL Database and Azure Database for PostgreSQL Hyperscale will be available for on-premises deployments. Organizations can now run database as a service (DBaaS) and offer it to their tenants as a platform-as-a-service (PaaS) offering. This makes it easier for users to deploy and manage cloud databases on their own infrastructure, without the overhead of setting up and maintaining the infrastructure on a physical server or virtual machine. Azure Data Services on Azure Arc still require an underlying Kubernetes cluster, but many management frameworks are supported by Microsoft Azure and Azure Arc.

All of the other Azure Arc benefits are included with the data services, such as automation, backup, monitoring, scaling, security, patching and cost management. Additionally, Azure Data Services can run on both connected and disconnected machines. The latest features and updates to the data services are automatically pushed down from Microsoft Azure to Azure Arc members so that the infrastructure is always current and consistent.

For more information about Azure Data Services with Azure Arc visit https://azure.microsoft.com/en-us/services/azure-arc/hybrid-data-services.

Cloud Availability with Azure Arc

One of the main advantages offered by Microsoft Azure is access to its unlimited hardware spread across multiple datacenters which provide business continuity. This gives Azure customers numerous ways to increase service availability, retain more backups, and gain disaster recovery capabilities. With the introduction of Azure Arc, these features provide even greater integration between on-premises servers and Microsoft Azure.

Azure Availability & Resiliency with Azure Arc

With Azure Arc, organizations can leverage Azure’s availability and resiliency features for their on-premises servers. Virtual Machine Scale Sets allow automatic application scaling by rapidly deploying dozens (or thousands) of VMs to quickly increase the processing capabilities of a cloud application. Integrated load balancing will distribute network traffic, and redundancy is built into the infrastructure to eliminate single points of failure. VM Availability Sets give administrators the ability to select a group of related VMs and force them to distribute themselves across different physical servers. This is recommended for redundant servers or guest clusters where it is important to have each virtualized instance spread out so that the loss of a single host will not take down an entire service. Azure Availability Zones extend this concept across multiple datacenters by letting organizations deploy datacenter-wide protection schemes that distribute applications and their data across multiple sites. Azure’s automated updating solutions are availability-aware so they will keep services online during a patching cycle, serially updating and rebooting a subset of hosts. Azure Arc helps hybrid cloud services take advantage of all of the Azure resiliency features.

For more information about Azure availability and resiliency visit https://azure.microsoft.com/en-us/features/resiliency.

Azure Backup & Restore with Azure Arc

Many organizations limit their backup plans because of their storage constraints since it can be costly to store large amounts of data which may not need to be accessed again. Azure Backup helps organizations by allowing their data to be backed up and stored on Microsoft Azure. This usually reduces costs as users are only paying for the storage capacity they are using. Additionally storing backups offsite helps minimize data loss as offsite backups provide resiliency to site-wide outages and can protect customers from ransomware. Azure Backup also offers compression, encryption and retention policies to help organizations in regulated industries. Azure Arc manages the backups and recovery of on-premises servers with Microsoft Azure, with the backups being stored in the customer’s own datacenter or in Microsoft Azure.

For more information about Azure Backup visit https://azure.microsoft.com/en-us/services/backup.

Azure Site Recovery & Geo-Replication with Azure Arc

One of the more popular hybrid cloud features enabled with Microsoft Azure is the ability to replicate data from an on-premises location to Microsoft Azure using Azure Site Recovery (ASR). This allows users to have a disaster recovery site without needing to have a second datacenter. ASR is easy to deploy, configure and operate, and it can even test disaster recovery plans. Using Azure Arc it is possible to set up geo-replication to move data and services from a managed datacenter running Windows Server Hyper-V, VMware vCenter or the Amazon Web Services (AWS) public cloud. Destination datacenters can include other datacenters managed by Azure Arc, Microsoft Azure and Amazon AWS.

For more information about Azure Site Recovery visit https://azure.microsoft.com/en-us/services/site-recovery.

Cloud Management with Azure Arc

Azure Arc introduces some on-premises management benefits which were previously available only in Microsoft Azure. These help organizations administer legacy hardware and disconnected machines with Azure-consistent features using multiple management tools.

Management Tools with Azure Arc

One of the fundamental design concepts of Microsoft Azure is to have centralized management layers (“planes”) that support diverse hardware, data, and administrative tools. The fabric plane controls the hardware through a standard set of interfaces and APIs. The data plane allows unified management of structured and unstructured data. And the control plane offers centralized management through various interfaces, including the GUI-based Azure Portal, Azure PowerShell, and other APIs. These layers interface with one another through a standard set of controls, so that the operational steps will be identical whether a user deploys a VM via the Azure Portal or via Azure PowerShell. Azure Arc can manage cloud resources with the same Azure developer tools.

At the time of this writing, the documentation for Azure Arc is not yet available, but some examples can be found in the quick start guides which are linked in the Getting Started with Azure Arc section.
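
Since the official documentation was not yet published at the time of writing, the following is only a hedged sketch of what day-to-day management from Azure PowerShell might look like; it assumes the Az.ConnectedMachine module and a placeholder resource group name.

```powershell
# Hypothetical sketch: list Azure Arc connected machines with Azure PowerShell.
# Assumes the Az.ConnectedMachine module; resource names are placeholders.
Install-Module Az.ConnectedMachine -Scope CurrentUser
Connect-AzAccount

# Show every connected machine in a resource group along with its agent status
Get-AzConnectedMachine -ResourceGroupName 'rg-arc-servers' |
    Select-Object Name, Status, AgentVersion |
    Format-Table -AutoSize
```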

Managing Legacy Hardware with Azure Arc

Azure Arc is hardware-agnostic, allowing Azure to manage a customer’s diverse or legacy hardware just like an Azure datacenter server. The hardware must meet certain requirements so that a virtualized Kubernetes cluster can be deployed on it, as Azure services run on this virtualized infrastructure. In the Public Preview, servers must be running Windows Server 2012 R2 (or newer) or Ubuntu 16.04 or 18.04. Over time, additional servers will be supported, with rumors of 32-bit (x86), Oracle and Linux hosts being supported as infrastructure servers.
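
For illustration only, the sketch below shows how a single on-premises Windows server might be onboarded with the Azure Arc connected machine agent (azcmagent); the resource group, subscription and tenant IDs are placeholders, and the exact flags may differ between agent versions.

```powershell
# Hedged sketch: register an on-premises Windows server with Azure Arc.
# Assumes the connected machine agent is already installed; all IDs are placeholders.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --resource-group "rg-arc-servers" `
    --tenant-id "00000000-0000-0000-0000-000000000000" `
    --subscription-id "11111111-1111-1111-1111-111111111111" `
    --location "westeurope"
```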

Offline Management with Azure Arc

Azure Arc will even be able to manage servers that are not regularly connected to the Internet, as is common with the military, emergency services, and sea vessels. Azure Arc has a concept of “connected” and “disconnected” machines. Connected servers have an Azure Resource ID and are part of an Azure resource group. If a server does not sync with Microsoft Azure every 5 minutes, it is considered disconnected, yet it can continue to run its local resources. Azure Arc allows these organizations to use the latest Azure services when they are connected, yet still use many of these features if the servers do not maintain an active connection, including Azure Data Services. Even some services which run Azure AI and are hosted in Microsoft Azure can work in disconnected environments while running on Azure Stack Edge.

Always Up-To-Date Tools with Azure Arc

One of the advantages of using Microsoft Azure is that all the services are kept current by Microsoft. The latest features, best practices, and AI learning are automatically available to all users in real-time as soon as they are released. When an admin logs into the Azure Portal through a web browser, they are immediately exposed to the latest technology to manage their distributed infrastructure. By ensuring that all users have the same management interface and APIs, Microsoft can guarantee consistency of behavior for all users across all hardware, including on-premises infrastructure when using Azure Arc. However, if the hardware is in a disconnected environment (such as on a sea vessel), there could be some configuration drift as older versions of Azure data services and Azure management tools may still be used until they are reconnected and synced.

Cloud Security & Compliance with Azure Arc

Public cloud services like Microsoft Azure are able to offer industry-leading security and compliance due to their scale and expertise. Microsoft employs more than 3,500 of the world’s leading security engineers who have been collaborating for decades to build the industry’s safest infrastructure. Through its billions of endpoints, Microsoft Azure leverages Azure AI to identify anomalies and detect threats before they become widespread. Azure Arc extends all of the security features offered in Microsoft Azure to on-premises infrastructure, including key vaults, monitoring, policies, security, threat protection, and update management.

Azure Key Vault with Azure Arc

When working in a distributed computing environment, managing credentials, passwords, and user access can become complicated. Azure Key Vault is a service that helps enhance data protection and compliance by securely protecting all keys and monitoring access. Azure Key Vault is supported by Azure Arc, allowing credentials for on-premises services and hybrid clouds to be centrally managed through Azure.

For more information about Azure Key Vault visit https://azure.microsoft.com/en-us/services/key-vault.

Azure Monitor with Azure Arc

Azure Monitor is a service that collects and analyzes telemetry data from Azure infrastructure, networks, and applications. The logs from managed services are sent to Azure Monitor where they are aggregated and analyzed. If a problem is identified, such as an offline server, it can trigger alerts or use Azure Automation to launch recovery workflows. Azure Arc can now monitor on-premises servers, networks, virtualization infrastructure, and applications, just like they were running in Azure. It even leverages Azure AI and Azure Automation to make recommendations and fixes to hybrid cloud infrastructure.

For more information about Azure Monitor visit https://azure.microsoft.com/en-us/services/monitor.

Azure Policy with Azure Arc

Most enterprises have certain compliance requirements for their IT infrastructure, especially those organizations within regulated industries. Azure Policy uses Microsoft Azure to audit an environment and aggregate all the compliance data into a single location. Administrators can get alerted about misconfigurations or configuration drift and even trigger automated remediation using Azure Automation. Azure Policy can be used with Azure Arc to apply policies to all connected machines, providing the benefits of cloud compliance to on-premises infrastructure.
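
As a hedged example of what a policy assignment might look like from Azure PowerShell, the sketch below assigns an existing built-in definition to the resource group that holds the Arc connected machines; the display-name filter and resource group name are illustrative, and the property layout can vary between Az.Resources versions.

```powershell
# Hypothetical sketch: assign a built-in policy definition to the resource group
# that contains Arc connected machines. Names below are illustrative placeholders.
$scope = (Get-AzResourceGroup -Name 'rg-arc-servers').ResourceId

# Pick a built-in definition by display name (adjust the filter for your tenant;
# newer Az.Resources versions expose DisplayName at the top level of the object).
$definition = Get-AzPolicyDefinition -Builtin |
    Where-Object { $_.Properties.DisplayName -like '*Audit*' } |
    Select-Object -First 1

New-AzPolicyAssignment -Name 'arc-example-assignment' -Scope $scope -PolicyDefinition $definition
```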

For more information about Azure Policy visit https://azure.microsoft.com/en-us/services/azure-policy.

Azure Security Center with Azure Arc

The Azure Security Center centralizes all security policies and protects the entire managed environment. When Security Center is enabled, the Azure monitoring agents will report data back from the servers, networks, virtual machines, databases, and applications. The Azure Security Center analytics engines will ingest the data and use AI to provide guidance. It will recommend a broad set of improvements to enhance security, such as closing unnecessary ports or encrypting disks. Perhaps most importantly it will scan all the managed servers and identify updates that are missing, and it can use Azure Automation and Azure Update Management to patch those vulnerabilities. Azure Arc extends these security features to connected machines and services to protect all registered resources.

For more information about Azure Security Center visit https://azure.microsoft.com/en-us/services/security-center

Azure Advanced Threat Protection with Azure Arc

Azure Advanced Threat Protection (ATP) helps deliver the industry’s leading cloud security solution by looking for anomalies and potential attacks with Azure AI. Azure ATP will look for suspicious computer or user activities and report any alerts in real time. Azure Arc lets organizations extend this cloud protection to their hybrid and on-premises infrastructure, offering leading threat protection across all of their cloud resources.

For more information about Azure Advanced Threat Protection visit https://azure.microsoft.com/en-us/features/azure-advanced-threat-protection.

Azure Update Management with Azure Arc

Microsoft Azure automates the process of applying patches, updates and security hotfixes to the cloud resources it manages. With Update Management, a series of updates can be scheduled and deployed on non-compliant servers using Azure Automation. Update management is aware of clusters and availability sets, ensuring that a distributed workload remains online while its infrastructure is patched by live migrating running VMs or containers between hosts. Azure will centrally manage updates, assessment reports, deployment results, and can create alerts for failures or other conditions. Organizations can use Azure Arc to automatically analyze and patch their on-premises and connected servers, virtual machines, and applications.

For more information about Azure Update Management visit https://docs.microsoft.com/en-us/azure/automation/automation-update-management.

Role-Based Access Control (RBAC) with Azure Arc

Controlling access to different resources is a critical function for any organization to enforce security and compliance. Microsoft Azure Active Directory (Azure AD) allows its customers to define granular access control for every user or user role based on different types of permissions (read, modify, delete, copy, sharing, etc.). There are also over 70 user roles provided by Azure, such as a Global Administrator, Virtual Machine Contributor or Billing Administrator. Azure Arc lets businesses extend role-based access control (RBAC) managed by Azure to on-premises environments. This means that any groups, policies, settings, security principals and managed identities that were deployed by Azure AD can now access all managed cloud resources. Azure AD also provides auditing so it is easy to track any changes made by users or security principals across the hybrid cloud.
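
A minimal sketch of the idea, assuming an Azure AD group and a resource group that already exist (both names are placeholders): granting the group the built-in Reader role on the resource group that contains the Arc connected machines gives its members read-only visibility into those on-premises resources.

```powershell
# Hypothetical sketch: grant an Azure AD group read-only access to the resource
# group that holds Arc connected machines. Group and resource group names are placeholders.
$group = Get-AzADGroup -DisplayName 'Hybrid Ops Readers'

New-AzRoleAssignment -ObjectId $group.Id `
    -RoleDefinitionName 'Reader' `
    -ResourceGroupName 'rg-arc-servers'
```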

For more information about Role-Based Access Control visit https://docs.microsoft.com/en-us/azure/role-based-access-control/overview.

DevOps and Application Management with Azure Arc

Over the past few years, containers have become more commonplace as they provide certain advantages over VMs, allowing the virtualized applications and services to be abstracted from their underlying virtualized infrastructure. This means that containerized applications can be uniformly deployed anywhere with any tools so that users do not have to worry about the hardware configuration. This technology has become popular amongst application developers, enabling them to manage their entire application development lifecycle without having a dependency on the IT department to set up the physical or virtual infrastructure. This development methodology is often called DevOps. One of the key design requirements with Azure Arc was to make it hardware agnostic, so with Azure Arc, developers can manage their containerized applications the same way whether they are running in Azure, on-premises or in a hybrid configuration.

Azure Kubernetes Service (AKS) & Kubernetes App Management with Azure Arc

Kubernetes is a management tool that allows developers to deploy, manage and update their containers. Azure Kubernetes Service (AKS) is Microsoft’s Kubernetes service, and it can be integrated with Azure Arc. This means that AKS can be used to manage on-premises servers running containers. In addition to Azure Kubernetes Service, Azure Arc can be integrated with other Kubernetes management platforms, including Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service.
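
For illustration, the hedged sketch below attaches an existing Kubernetes cluster to Azure Arc using the Azure CLI connectedk8s extension (run here from PowerShell); the cluster and resource group names are placeholders, and the exact commands may change as the preview evolves.

```powershell
# Hypothetical sketch: connect an existing Kubernetes cluster to Azure Arc.
# Requires the Azure CLI and a kubeconfig for the target cluster; names are placeholders.
az extension add --name connectedk8s
az connectedk8s connect --name 'onprem-k8s-01' --resource-group 'rg-arc-k8s'

# Confirm the cluster now shows up as an Azure Arc resource
az connectedk8s list --resource-group 'rg-arc-k8s' --output table
```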

For more information about Azure Container Services visit https://azure.microsoft.com/en-us/product-categories/containers and for Azure Kubernetes Services (AKS) visit https://azure.microsoft.com/en-us/services/kubernetes-service.

Other DevOps Tools with Azure Arc

For container management on Azure Arc, developers can use any of the common Kubernetes management platforms, including Azure Kubernetes Service, Amazon EKS, Google Kubernetes Engine, and IBM Kubernetes Service. All standard deployment and management operations are supported on Azure Arc hardware, enabling cross-cloud management.

More information about the non-Azure management tools is provided in the Third-Party Integration with Azure Arc section.

DevOps On-Premises with Azure Arc

Many developers prefer to work on their own hardware and some are required to develop applications in a private environment to keep their data secure. Azure Arc allows developers to build and deploy their applications anywhere utilizing Azure’s cloud-based AI, security and other cloud features while retaining their data, IP or other valuable assets within their own private cloud. Additionally, Azure Active Directory can use role-based access control (RBAC) and Azure Policies to manage developer access to sensitive company resources.

Elastic Scalability & Rapid Deployment with Azure Arc

Containerized applications are designed to start quickly when running on a highly available Kubernetes cluster. The app will bypass the underlying operating system, allowing it to be rapidly deployed and scaled. These applications can quickly grow to an unlimited capacity when deployed on Microsoft Azure. When using Azure Arc, the applications can be managed across public and private clouds. Applications will usually contain several container types that can be deployed in different locations based on their requirements. A common deployment configuration for a two-tiered application is to deploy the web frontend on Microsoft Azure for scalability and the database in a secure private cloud backend.

Hybrid Cloud Integration with Azure Arc

Microsoft’s hybrid cloud initiatives over the past few years have included certifying on-premises software and hardware configurations known as Azure Stack. Azure Stack allows organizations to run Azure-like services on their own hardware in their datacenter. It allows organizations that may be restricted from using public cloud services to utilize the best parts of Azure within their own datacenter. Azure Stack is most commonly deployed by organizations that have requirements to keep their customers’ data in house (or within their territory) for data sovereignty, making it popular for customers who could not adopt the Microsoft Azure public cloud. Azure Arc easily integrates with Azure Stack Hub, Azure Stack Edge, and all the Azure Stack HCI configurations, allowing these services to be managed from Azure.

For more information about Azure Stack visit https://azure.microsoft.com/en-us/overview/azure-stack.

Azure Stack Hub with Azure Arc

Azure Stack Hub (formerly Microsoft Azure Stack) offers organizations a way to run Azure services from their own datacenter, from a service provider’s site, or from within an isolated environment. This cloud platform allows users to deploy Windows VMs, Linux VMs and Kubernetes containers on hardware which they operate. This offering is popular with developers who want to run services locally, organizations which need to retain their customer’s data onsite, and groups which are regularly disconnected from the Internet, as is common with sea vessels or emergency response personnel. Azure Arc allows Azure Stack Hub nodes to run supported Azure services (like Azure Data Services) while being centrally managed and optimized via Azure. These applications can be distributed across public, private or hybrid clouds.

For more information about Azure Stack Hub visit https://docs.microsoft.com/en-us/azure-stack/user/?view=azs-1908.

Azure Stack Edge with Azure Arc

Azure Stack Edge (previously Azure Data Box Edge) is a virtual appliance which can run on any hardware in a datacenter, branch office, remote site or disconnected environment. It is designed to run edge computing workloads on Hyper-V VMs, VMware VMs, containers and Azure services. These edge servers are optimized to run IoT, AI and business workloads so that processing can happen onsite, rather than being sent across a network to a cloud datacenter for processing. When the Azure Stack Edge appliance is (re)connected to the network, it transfers any data at high speed, and data transfers can be scheduled to run during off-hours. It supports machine learning capabilities through an FPGA or GPU. Azure Arc can centrally manage Azure Stack Edge, its virtual appliances and physical hardware.

For more information about Azure Stack Edge visit https://azure.microsoft.com/en-us/services/databox/edge.

Azure Stack Hyperconverged Infrastructure (HCI) with Azure Arc

Azure Stack Hyperconverged Infrastructure (HCI) is a program which provides preconfigured hyperconverged hardware from validated OEM partners, optimized to run Azure Stack. Businesses which want to run Azure-like services on-premises can purchase or rent hardware which has been standardized to Microsoft’s requirements. VMs, containers, Azure services, AI, IoT and more can run consistently on the Microsoft Azure public cloud or on Azure Stack HCI hardware in a datacenter. Cloud services can be distributed across multiple datacenters or clouds and centrally managed using Azure Arc.

For more information about Azure Stack HCI visit https://azure.microsoft.com/en-us/overview/azure-stack/hci.

Managed Service Providers (MSPs) with Azure Arc

Azure Lighthouse Integration with Azure Arc

Azure Lighthouse is a technology designed for managed service providers (MSPs), ISVs or distributed organizations which need to centrally manage their tenants’ resources. Azure Lighthouse allows service providers and tenants to create a two-way trust to allow unified management of cloud resources. Tenants will grant specific permissions for approved user roles on particular cloud resources, so that they can offload the management to their service provider. Now service providers can add their tenants’ private cloud environments under Azure Arc management, so that they can take advantage of the new capabilities which Azure Arc provides.

For more information about Azure Lighthouse visit https://azure.microsoft.com/en-us/services/azure-lighthouse or on the Altaro MSP Dojo.

Third-Party Integration with Azure Arc

Within the Azure management layer (control plane) exists Azure Resource Manager (ARM). ARM provides a way to easily create, manage, monitor and delete any Azure resource. Every native and third-party Azure resource uses ARM to ensure that it can be centrally managed through Azure management tools. Azure Arc now allows non-Azure resources to be managed by Azure. This can include third-party clouds (Amazon Web Services, Google Cloud Platform), Windows and Linux VMs, VMs on non-Microsoft hypervisors (VMware vSphere, Google Compute Engine, Amazon EC2), and Kubernetes containers and clusters (including IBM Kubernetes Service, Google Kubernetes Engine and Amazon EKS). At the time of this writing, limited information is available about third-party integration, but it will be added over time.

Amazon Web Services (AWS) Integration with Azure Arc

Amazon Web Services (AWS) is Amazon’s public cloud platform. Some services from AWS can be managed by Azure Arc. This includes operating virtual machines running on the Amazon Elastic Compute Cloud (EC2) and containers running on Amazon Elastic Kubernetes Service (EKS). Azure Arc also lets an AWS site be used as a geo-replicated disaster recovery location. AWS billing can also be integrated with Azure Cost Management & Billing so that expenses from both cloud providers can be viewed in a single location.

Additional information will be added once it is made available.

Google Cloud Platform (GCP) Integration with Azure Arc

Google Cloud Platform (GCP) is Google’s public cloud platform. Some services from GCP can be managed by Azure Arc. This includes operating virtual machines running on Google Compute Engine (GCE) and containers running on Google Kubernetes Engine (GKE).

Additional information will be added once it is made available.

IBM Kubernetes Service Integration with Azure Arc

IBM Cloud is IBM’s public cloud platform. Some services from IBM Cloud can be managed by Azure Arc. This includes operating containers running on IBM Kubernetes Service (Kube).

Additional information will be added once it is made available.

Linux VM Integration with Azure Arc

In 2014 Microsoft’s CEO Satya Nadella declared, “Microsoft loves Linux”. Since then the company has embraced Linux integration, making Linux a first-class citizen in its ecosystem. Microsoft even contributes code to the Linux kernel so that it operates efficiently when running as a VM or container on Microsoft’s operating systems. Virtually all management features for Windows VMs are available to supported Linux distributions, and this extends to Azure Arc. Azure Arc admins can use Azure to centrally create, manage and optimize Linux VMs running on-premises, just like any standard Windows VM.

VMware Cloud Solution Integration with Azure Arc

VMware offers a popular virtualization platform and management studio (vSphere) which runs on VMware’s hypervisor. Microsoft has acknowledged that many customers running legacy on-premises hardware are using VMware, so it provides numerous integration points to Azure and Azure Arc. Organizations can even virtualize and deploy their entire VMware infrastructure on Azure, rather than in their own datacenter. Microsoft makes it easy to deploy, manage, monitor and migrate VMware systems, and with Azure Arc, businesses can now centrally operate their on-premises VMware infrastructure too. While the full management functionality of VMware vSphere is not available through Azure Arc, most standard operations are supported.

For more information about VMware Management with Azure visit https://azure.microsoft.com/en-us/overview/azure-vmware.


Go to Original Article
Author: Symon Perriman

How to manage Exchange hybrid mail flow rules

An Exchange hybrid deployment generally provides a good experience for the administrator, but it can be found lacking in a few areas, such as transport rules.

Transport rules — also called mail flow rules — identify and take actions on all messages as they move through the transport stack on the Exchange servers. Exchange hybrid mail flow rules can be tricky to set up properly to ensure all email is reviewed, no matter if mailboxes are on premises or in Exchange Online in the cloud.

Transport rules solve many compliance-based problems that arise in a corporate message deployment. They add disclaimers or signatures to messages. They funnel messages that meet specific criteria for approval before they leave your control. They trigger encryption or other protections. It’s important to understand how Exchange hybrid mail flow rules operate when your organization runs a mixed environment.

Mail flow rules and Exchange hybrid setups

The power of transport rules stems from their consistency. For an organization with compliance requirements, transport rules are a reliable way to control all messages that meet defined criteria. Once you develop a transport rule for certain messages, there is some comfort in knowing that a transport rule will evaluate every email. At least, that is the case when your organization is only on premises or only in Office 365.

Things change when your organization moves to a hybrid Exchange configuration. While mail flow rules evaluate every message that passes through the transport stack, that does not mean that on-premises transport rules will continue to evaluate messages sent to or from mailboxes housed in Office 365 and vice versa.

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

Depending on your routing configuration, email may go from an Exchange Online mailbox and out of your environment without an evaluation by the on-premises transport rules. It’s also possible that both the mail flow rules on premises and the other set of mail flow rules in Office 365 will assess every email, which may cause more problems than not having any messages evaluated.

To avoid trouble, you need to consider the use of transport rules both for on-premises and for online mailboxes and understand how the message routing configuration within your hybrid environment will affect how Exchange applies those mail flow rules.

Message routing in Exchange hybrid deployments

A move to an Exchange hybrid deployment requires two sets of transport rules. Your organization needs to decide which mail flow rules will be active in which environment and how the message routing configuration you choose affects those transport rules.

All message traffic that passes through an Exchange deployment will be evaluated by the transport rules in that environment, but the catch is that an Exchange hybrid deployment consists of two different environments, at least as far as transport rules are concerned. A message sent from an on-premises mailbox to another on-premises mailbox generally won't pass through the transport stack, and, thus, the mail flow rules, in Exchange Online. The opposite is also true: Messages sent from an online mailbox to another online mailbox in the same tenant will not generally pass through the on-premises transport rules. Copying the mail flow rules from your on-premises Exchange organization into your Exchange Online tenant does not solve this problem on its own and can lead to some messages being handled by the same transport rule twice.

When you configure an Exchange hybrid deployment, you need to decide where your mail exchange (MX) record points. Some organizations choose to have the MX record point to the existing on-premises Exchange servers and then route message traffic to mailboxes in Exchange Online via a send connector. Other organizations choose to have the MX record point to Office 365 and then flow to the on-premises servers.

There are more decisions to be made about the way email leaves your organization as well. By default, an email sent from an Exchange Online mailbox to an external recipient will exit Office 365 directly to the internet without passing through the on-premises Exchange servers. This means that transport rules, which are intended to evaluate email traffic before it leaves your organization, may never have that opportunity.

Exchange hybrid mail flow rules differ for each organization

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

For organizations that want to copy transport rules from on-premises Exchange Server into Exchange Online, you can use PowerShell. The Export-TransportRuleCollection PowerShell cmdlet works on all currently supported versions of on-premises Exchange Server. This cmdlet creates an XML file that you can load into your Exchange Online tenant with another cmdlet called Import-TransportRuleCollection. This is a good first step to ensure all mail flow rules are the same in both environments, but that’s just part of the work.
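
As a rough sketch of that flow (the file path is a placeholder), the export runs in the on-premises Exchange Management Shell and the import runs in an Exchange Online PowerShell session:

```powershell
# In the on-premises Exchange Management Shell: export the rule collection to XML
$export = Export-TransportRuleCollection
Set-Content -Path 'C:\Temp\TransportRules.xml' -Value $export.FileData -Encoding Byte

# In an Exchange Online PowerShell session: read the XML and import the rules
[Byte[]]$data = Get-Content -Path 'C:\Temp\TransportRules.xml' -Encoding Byte -ReadCount 0
Import-TransportRuleCollection -FileData $data
```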

Transport rules, like all Exchange Server features, have evolved over time. They may not work the same in all supported versions of on-premises Exchange Server and Exchange Online. Simply exporting and importing your transport rules may cause unexpected behavior.

One way to resolve this is to duplicate the transport rules in both environments and then add two more transport rules on each side. The first new transport rule checks the message header and tells the transport stack — both on premises and in the cloud — that the message has already been through the transport rules in the other environment. This rule should include a statement to stop processing any further transport rules. A second new transport rule should add a header indicating that the message has already been through the transport rules in one environment. This is a difficult setup to get right and requires a good deal of care to implement properly if you choose to go this route.
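
A minimal sketch of that approach is shown below, using an illustrative custom header name; you would create the equivalent pair of rules in both the on-premises organization and Exchange Online, with the priorities arranged so the "skip" rule is evaluated before everything else and the "stamp" rule runs last.

```powershell
# Hypothetical sketch: header-stamping rules to prevent double processing.
# The header name 'X-Contoso-TransportRules' is illustrative, not a standard value.

# Rule 1 (highest priority): if the other environment already processed the message, stop.
New-TransportRule -Name 'Skip - already processed elsewhere' -Priority 0 `
    -HeaderContainsMessageHeader 'X-Contoso-TransportRules' `
    -HeaderContainsWords 'Processed' `
    -StopRuleProcessing $true

# Rule 2 (create it last so it has the lowest priority): stamp the header so the
# other environment knows to skip its own rules.
New-TransportRule -Name 'Stamp - processed here' `
    -SetHeaderName 'X-Contoso-TransportRules' `
    -SetHeaderValue 'Processed'
```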

I expect that the fairly new hybrid organization transfer feature of the Hybrid Configuration Wizard will eventually handle the export and import of transport rules, but that won’t solve the routing issues or the issues with running duplicate rules.

Go to Original Article
Author:

What’s new with the Exchange hybrid configuration wizard?

Exchange continues to serve as the on-ramp into Office 365 for many organizations. One big reason is the hybrid capabilities that connect on-premises Exchange and Exchange Online.

If you use Exchange Server, it’s not difficult to join it to Exchange Online for a seamless transition into the cloud. Microsoft refined the Exchange hybrid configuration wizard to remove a lot of the technical hurdles to shift one of the more important IT workloads into Exchange Online. If you haven’t seen the Exchange hybrid experience recently, you may be surprised about some of the improvements over the last few years.

Exchange hybrid setups have come a long way

I started configuring Exchange hybrid deployments the first week Microsoft made Office 365 publicly available in June 2011 with the newest version of Exchange at the time, Exchange 2010. Setting up an Exchange hybrid deployment was a laborious task. Microsoft provided a 75-page document with the Exchange hybrid configuration steps, which would take about three workdays to complete. Then I could start the troubleshooting process to fix the innumerable typos I made during the setup.

In December 2011, Microsoft released Exchange 2010 Service Pack 2, which included the Exchange hybrid configuration wizard. The wizard reduced that 75-page document to a few screens of information that cut down the work from three days to about 15 minutes. The Exchange hybrid configuration wizard did not solve all the problems of an Exchange hybrid deployment, but it made things a lot easier.

What the Exchange hybrid configuration wizard does

The Exchange hybrid configuration wizard is just a PowerShell script that runs all the necessary configuration tasks. The original hybrid configuration wizard completed seven key tasks:

  1. verified prerequisites for a hybrid deployment;
  2. configured Exchange federation trust;
  3. configured relationships between on-premises Exchange and Exchange Online;
  4. configured email address policies;
  5. configured free/busy calendar sharing;
  6. configured secure mail flow between the on-premises and Exchange Online organizations; and
  7. enabled support for Exchange Online archiving.

How the Exchange hybrid configuration wizard evolved

Since the initial release of the Exchange hybrid configuration wizard, Microsoft expanded its capabilities in multiple ways with several major improvements over the last few years.

Exchange hybrid configuration wizard decoupled from service pack updates: This may seem like a minor change, but it’s a significant development. Having the Exchange hybrid configuration wizard as part of the standard Exchange update cycle meant that any updates to the wizard had to wait until the next service pack update.

Now the Exchange hybrid configuration wizard is an independent component from Exchange Server. When you run the wizard, it checks for a new release and updates itself to the most current configuration. This means you get fixes or additional features without waiting through that quarterly update cycle.

Minimal hybrid configuration: Not every migration has the same requirements. Sometimes a quicker migration with fewer moving parts is needed, and Microsoft offered an update in 2016 for a minimal hybrid configuration feature for those scenarios.

The minimal hybrid configuration helps organizations that cannot use the staged migration option, but want an easy switchover without worrying about configuring extras, such as the free/busy federation for calendar availability.

The minimal hybrid configuration leaves out the following functionality from a full hybrid configuration:

  • cross-premises free/busy calendar availability;
  • Transport Layer Security secured mail flow between on-premises Exchange and Exchange Online;
  • cross-premises eDiscovery;
  • automatic Outlook on the web (OWA) and ActiveSync redirection for migrated users; and
  • automatic retention for archived mailboxes.

If these features aren’t important to your organization and speed is of the essence, the minimal hybrid configuration is a good option.

Recent update goes further with setup work

Microsoft designed the Exchange hybrid configuration wizard to migrate mailboxes without interrupting the end user’s ability to work. The wizard gives users a full global address book, free/busy calendar availability and some of the mailbox delegation features used with an on-premises Exchange deployment.

A major new addition to the hybrid configuration wizard is its ability to transfer some of the on-premises Exchange configurations to the Exchange Online tenant. The Hybrid Organization Configuration Transfer feature pulls configuration settings from your Exchange organization and does a one-time setup of the same settings in your Exchange Online tenant.

Microsoft expanded the abilities of Hybrid Organization Configuration Transfer in November 2018 so it configures the following settings: Active Sync Mailbox Policy, Mobile Device Mailbox Policy, OWA Mailbox Policy, Retention Policy, Retention Policy Tag, Active Sync Device Access Rule, Active Sync Organization Settings, Address List, DLP Policy, Malware Filter Policy, Organization Config and Policy Tip Configuration.

The Exchange hybrid configuration wizard only handles these settings once. If you make changes in your on-premises Exchange organization after you run the Exchange hybrid configuration wizard, those changes will not be replicated in the cloud automatically.

Go to Original Article
Author:

For Sale – 500gb and 1tb Seagate 2.5″ hard drives

Seagate’s third generation SSHDs (solid state hybrid drives), now for both laptops and desktops, are marketed as a replacement for HDDs and serve as a good option for those otherwise considering an SSD. SSHDs aim to offer users the price-point and robust capacity of HDDs while also utilizing…

Go to Original Article
Author:

Dremio Data Lake Engine 4.0 accelerates query performance

Dremio is advancing its technology with a new release that supports AWS, Azure and hybrid cloud deployments, providing what the vendor refers to as a Data Lake Engine.

The Dremio Data Lake Engine 4.0 platform is rooted in multiple open source projects, including Apache Arrow, and offers the promise of accelerated query performance for data lake storage.

Dremio made the platform generally available on Sept. 17. The Dremio Data Lake Engine 4.0 update introduces a feature called column-aware predictive pipelining that helps predict access patterns, which makes queries faster. The new Columnar Cloud Cache (C3) feature in Dremio also boosts performance by caching data closer to where compute execution occurs.

For IDC analyst Stewart Bond, the big shift in the Dremio 4.0 update is how the data lake engine vendor has defined its offering as a “Data Lake Engine” focused on AWS and Azure.

In some ways, Dremio had previously struggled to define what its technology actually does, Bond said. In the past, Dremio had been considered a data preparation tool, a data virtualization tool and even a data integration tool, he said. It does all those things, but in ways, and with data, that differ markedly from traditional technologies in the data integration software market.

“Dremio offers a semantic layer, query and acceleration engine over top of object store data in AWS S3 or Azure, plus it can also integrate with more traditional relational database technologies,” Bond said. “This negates the need to move data out of object stores and into a data warehouse to do analytics and reporting.”

For data in a data lake to be valuable, it typically needs to be extracted, refined and delivered to data warehouses, analytics, machine learning, or operational applications where it can also be transformed into something different when blended with other data ingredients.
Stewart Bond, analyst, IDC

Simply having a data lake doesn’t do much for an organization. A data lake is just data, and just as with natural lakes, water needs to be extracted, refined and delivered for consumption, Bond said.

“For data in a data lake to be valuable, it typically needs to be extracted, refined and delivered to data warehouses, analytics, machine learning or operational applications where it can also be transformed into something different when blended with other data ingredients,” Bond said. “Dremio provides organizations with the opportunity to get value out of data in a data lake without having to move the data into another repository, and can offer the ability to blend it with data from other sources for new insights.”

How Dremio Data Lake Engine 4.0 works

Organizations use technologies like ETL (extract, transform, load), among other things, to move data from data lake storage into a data warehouse because they can’t query the data fast enough where it is, said Tomer Shiran, co-founder and CTO of Dremio. That performance challenge is one of the drivers behind the C3 feature in Dremio 4.

“With C3 what we’ve developed is a patent pending real-time distributed cache that takes advantage of the NVMe devices that are on the instances that we’re running on to automatically cache data from S3,” Shiran explained. “So when the query engine is accessing a piece of data for the second time, it’s at least 10 times faster than getting it directly from S3.”

Dremio data lake architecture

The new column-aware predictive pipelining feature in Dremio Data Lake Engine 4.0 further accelerates query performance for the initial access. The feature increases data read throughput to the maximum that is allowed on a given network, Shiran explained.

While Dremio is positioning its technology as a data lake engine that can be used to query data stored in a data lake, Shiran noted that the platform also has data virtualization capabilities. With data virtualization, pointers or links to sources of data enable the creation of a logical data layer.

Apache Arrow

One of the foundational technologies that enables the Dremio Data Lake Engine is the open source Apache Arrow project, which Shiran helped to create.

“We took the internal memory format of Dremio, and we open sourced that as Apache Arrow, with the idea that we wanted our memory format to be an industry standard,” Shiran said.

Arrow has become increasingly popular over the past three years and is now used by many different tools, including Apache Spark.

With the growing use of Arrow, Dremio’s goal is to make communications between its platform and other tools that use Arrow as fast as possible. Among the ways that Dremio is helping to make Arrow faster is with the Gandiva effort that is now built into Dremio 4, according to the vendor. Gandiva is an execution kernel that is based on the LLVM compiler, enabling real-time code compilation to accelerate queries.

Dremio will continue to work on improving performance, Shiran said.

“At the end of the day, customers want to see more and more performance, and more data sources,” he said. “We’re also making it more self-service for users, so for us we’re always looking to reduce friction and the barriers.”

Go to Original Article
Author:

VMware Cloud on AWS migrations continue to pose challenges

Hybrid cloud solutions provider Unitas Global said it is expecting an uptick in VMware Cloud on AWS migrations ahead, but noted migrations continue to pose problems.

According to the Los Angeles-based company, which provides cloud infrastructure, managed services and connectivity services, VMware Cloud on AWS has gained traction among enterprise clients with extensive VMware-based legacy infrastructure. Those legacy environments in the past have proved difficult to migrate, but VMware Cloud on AWS has smoothed the journey.

“[VMware Cloud on AWS] has given us a path to migrating legacy environments to cloud with less friction,” said Grant Kirkwood, CTO at Unitas Global.

VMware Cloud on AWS has also drummed up customer interest for its disaster recovery capabilities, which can provide significant cost reductions compared with traditional enterprise DR infrastructure.  “We are seeing a lot of interest in this particular use case,” he said.

Despite its benefits, however, Kirkwood has found that VMware Cloud on AWS migrations can be problematic for some customers. The biggest challenge usually stems from enterprises’ complex, interwoven environments. As those environments evolve, they tend to amass hidden dependencies that can break during cloud migrations, he said. “So no matter how much planning you seem to do, you pick up a database or middleware application and migrate it to the cloud, and [then] five other downstream [apps] break because they were dependent on that and it wasn’t known,” Kirkwood said.

A report from Faction, a Denver-based multi-cloud managed service provider, cited cost management (51%) as the top VMware Cloud on AWS usage challenge, followed by network complexity (37%) and AWS prerequisites (27%). Faction’s report, published in August, was based on a survey of 1,156 IT and business professionals.

VMware poised for multi-cloud opportunities

While enterprise multi-cloud adoption remains in its early stages, Kirkwood said VMware has been successfully redeveloping its portfolio for when it matures.

Each of the leading public cloud providers is trying to differentiate itself based on its unique capabilities and services, he said. For the most part, enterprise customers today have barely scratched the surface of Google’s, AWS’ and Microsoft’s rapidly expanding menus of services. As enterprises gradually embrace more public cloud services, “being able to leverage all of them across a common data set [will be valuable] for companies that are sophisticated enough to take advantage of that,” he said.


According to Kirkwood, Google Cloud Platform (GCP) excels in AI and machine learning tooling that can be applied to large data sets. GCP is also “very competitive in large-scale storage,” he noted. Meanwhile, AWS has developed powerful analytics and behavioral tooling. Microsoft, though it “has probably the least sophisticated offerings,” provides “the path of least resistance for Microsoft-centric workloads.”

“What I think is going to be interesting to watch is how VMware adapts what they are doing to provide value across that much broader spectrum of [public cloud] services as they gain popularity,” he said.

Other news

  • Insight Enterprises, a technology solutions and services provider based in Tempe, Ariz., has completed its acquisition of PCM Inc., a provider of IT products and services. The deal expands Insight’s reach into the mid-market, especially in North America, and adds more than 2,700 salespeople, technical architects, engineers, consultants and service delivery personnel, according to the company.
  • Iland, a hosted cloud, backup and disaster recovery services provider, said it is reaching “a broader audience of enterprise customers” through a growing network of resellers and managed services providers. SMBs had been the traditional customer set for the company’s VMware-based offerings. The Houston-based company also said it has expanded its channel program. The program provides a partner portal for training, certification and sales management; a new data center in Canada for regional partners in North America; and an updated Catalyst cloud assessment tool.
  • MSP software vendor ConnectWise launched an organization that aims to boost cybersecurity among channel partners. The Technology Solution Provider Information Sharing and Analysis Organization, or TSP-ISAO, offers its members access to threat intelligence, cybersecurity best practices, and other tools and resources.
  • Accenture disclosed two acquisitions this week. The company acquired Northstream, a consulting firm in Stockholm that works with communications service providers and networking services vendors, and Fairway Technologies, an engineering services provider with offices in San Diego; Irvine, Calif.; and Austin, Texas.
  • Ensono, a hybrid IT services provider, launched a managed services offering for VMware Cloud on AWS and said it has achieved a VMware Cloud on AWS Solution Competency.
  • Sparkhound, a digital solutions firm, said its digital transformation project at paving company Pavecon involved Microsoft Office 365, SharePoint, Azure SQL Database and Active Directory. The project also drew upon Power BI for business analytics and PowerApps for creating mobile apps on Android, iOS and Windows, according to the company.
  • US Signal, a data center services provider based in Grand Rapids, Mich., unveiled its managed Website and Application Security Solution. The offering builds upon the company’s partnership with Cloudflare, an internet security company, according to US Signal. The managed website and application security offering provides protection against DDoS, ransomware, malicious bots and application layer attacks, the company said.
  • Cloud communications vendor CoreDial rolled out its CoreNexa Contact Center Certification Program. The program offers free sales and technical training on the vendor’s contact center platform.
  • Security vendor Kaspersky revealed that more than 2,000 companies have joined its global MSP program. Kaspersky launched its MSP program in 2017.
  • Service Express, a third-party maintenance provider based in Grand Rapids, Mich., has opened an office in the Washington, D.C., area. The company specializes in post-warranty server, storage and network support.

Market Share is a news roundup published every Friday.


IBM Cloud Paks open new business for channel partners

IBM said its latest push into the hybrid and multi-cloud market is setting the stage for new channel opportunities. 

The company this week revealed IBM Cloud Paks, a new set of IBM software offerings containerized on Red Hat OpenShift. According to IBM, Cloud Paks aim to help customers migrate, integrate and modernize applications in multiple-cloud environments.

Those environments include public clouds such as AWS, Azure, Google Cloud Platform, Alibaba and IBM Cloud, as well as private clouds. The Cloud Paks launch follows on the heels of IBM closing its $34 billion Red Hat acquisition in July and is part of a broader strategy to make its software portfolio cloud-native and OpenShift-enabled.

“The strategy has been to containerize the middleware on a common Kubernetes platform. That common Kubernetes platform is Red Hat OpenShift,” said Brian Fallon, director of worldwide digital and partner ecosystems, IBM Cloud and cognitive software.

For IBM business partners, Cloud Paks offer a modular approach to solving common problems faced by customers in their journeys to cloud, he said. The company released five Cloud Pak products, addressing data virtualization; application development; integration of applications, data, cloud services and APIs; process automation; and multi-cloud management.

IBM Cloud Paks can be mixed and matched to address different customer scenarios. For example, the Cloud Pak for Applications and Cloud Pak for Integration “together represent a great opportunity for partners to help their clients move and modernize workloads to a cloud environment,” Fallon said.

Dorothy Copeland, global vice president of programs and business development for the IBM partner ecosystem, said IBM’s push into hybrid and multi-cloud products is creating new opportunities for both IBM and Red Hat partners.

“We are enabling the market to drive hybrid, multi-cloud solutions and, in that, enabling our business partners to be able to do that, as well. … There is a huge opportunity for partners, especially around areas where partners can add specific knowledge, services and management of the deployment,” she said.

Cloud Paks may also serve as an entry point for Red Hat partners to build IBM practices. Copeland noted that Red Hat partners have shown increasing interest in joining the IBM partner ecosystem since the acquisition was announced. IBM has stated it intends Red Hat to operate independently under its ownership.

Logically acquires New York-area IT services company

Christopher Claudio, CEO at Logically

Logically, a managed IT services provider based in Portland, Maine, has acquired Sullivan Data Management, a 10-employee outsourced IT services firm located north of New York City.

Launched earlier this year, Logically formed from the merger of Winxnet Inc., an IT consulting and outsourcing firm in Portland, and K&R Network Solutions, a San Diego-based managed service provider (MSP). 

Christopher Claudio, Logically’s CEO, said in an April 2019 interview that the company was looking for additional acquisitions. The Sullivan Data Management deal is the first of those transactions. Claudio said two or three acquisitions may follow by the end of the year.

Sullivan Data Management fits Logically’s strategy of focusing on locations outside of top-tier metropolitan areas, where MSP competition is thickest. The company is based in Yorktown Heights in Westchester County, an hour’s drive from New York. Sullivan Data Management serves customers in Westchester and neighboring counties.


“This is the type of acquisition we are looking for,” Claudio said.

He also pointed to an alignment between Logically’s vertical market focus and Sullivan Data Management’s local government customer base.

“We do a lot of work within the public sector,” Claudio said, noting the bulk of Sullivan Data Management’s clients are in the public safety and municipal management sectors.

Barracuda reports growth in email security market

Barracuda Networks said its email security business is booming, with a $200 million annual revenue run rate for fiscal year 2019, which ended Feb. 28.

For the first quarter of its fiscal year 2020, Barracuda said its email security product, Barracuda Sentinel, saw 440% year-over-year growth in sales bookings. In the same time frame, its email security, backup and archiving package, Barracuda Essentials, saw 46% growth in sales bookings, the vendor said.

Meanwhile, Barracuda’s business unit dedicated to MSPs reported the annual recurring revenue for its email protection business increased 122% year over year for the first quarter of fiscal year 2020.

Ezra Hookano, vice president of channels at Barracuda, based in Campbell, Calif., cited conditions of the email security market as one driver behind the company’s growth. Phishing attacks have become more sophisticated in their social-engineering tactics. Email security threats are “very specific and targeted to you and your industry, and [no one is] immune — big or small,” he said.

Hookano also pointed to Barracuda’s free threat scan tool as an important business driver. He said many Barracuda resellers are using the threat scans to drive the sales process.

“We, to this point, have never run a threat scan that didn’t come back with at least some things that were wrong in the [email] network. … About 30% to 40% of the time, something is so bad that [customers] have to purchase immediately.”

Barracuda is looking to differentiate itself from its pure-play email security competitors by tapping into its portfolio, he noted. The company’s portfolio includes web filtering and firewall products, which feed threat data into Barracuda email security.

“If I’m a pure-play email security vendor now, I no longer have the ability to be as accurate as a portfolio company,” Hookano said.

Data from Barracuda’s remote monitoring and management product, Managed Workplace, which the vendor acquired from Avast earlier this year, also bolster email security capabilities.

“Our goal in the email market … is to use all of our other products and the footprint from our other products to make our email security signatures better and to continue to build on our lead there,” he said.

Equinix cites channel in Q2 bookings, customer wins

Equinix Inc., an interconnection and data center company based in Redwood City, Calif., cited channel sales as a key driver behind second-quarter bookings.

The company said channel partners contributed more than 25% of the bookings for the quarter ended June 30. And those bookings accounted for 60% of Equinix’s new customer wins during the quarter, according to the company. Equinix’s second-quarter revenue grew 10% year over year to $1.385 billion.

In a statement, Equinix said it has “deepened its engagement with high-priority partners to drive increased productivity and joint offer creation across its reseller and alliance partners.”

Those partners include Amazon, AT&T, Microsoft, Oracle, Orange, Telstra, Verizon and World Wide Technology, according to a company spokeswoman. New channel wins in the second quarter include a deal in which Equinix is partnering with Telstra to provide cloud connectivity at Genomics England.

Equinix has also partnered with Assured DP, a Rubrik-as-a-service provider, in a disaster recovery offering.

Other news

  • Microsoft partners could see a further boost in momentum for the company’s Teams calling and collaboration platform. Microsoft said it will retire Skype for Business Online on July 31, 2021, a move that should pave the way for Skype-to-Teams migrations. Microsoft officials speaking last month at the Inspire conference cited Teams as among the top channel opportunities for Microsoft’s 2020 fiscal year. Partners expect to find business in Teams training and governance. The Skype for Business Online retirement does not affect Skype Consumer services or Skype for Business Server, according to a Microsoft blog post.
  • Google said more than 90 partners have obtained Google Cloud Partner specializations in the first half of 2019. Partners can earn specializations in areas such as applications development, cloud migration, data analytics, education, work transformation, infrastructure, IoT, location-based services, machine learning, marketing analytics and security. Partners that acquired specializations during the first half of the year include Accenture, Agosto, Deloitte Consulting and Maven Wave Partners. The specialization announcement follows the launch of the Google Cloud Partner Advantage Program, which went live July 1.
  • Mimecast Ltd., an email and data security company based in Lexington, Mass., unveiled its Cyber Alliance Program, which aims to bring together security vendors into what it terms a “cyber-resilience ecosystem.” The program includes cybersecurity technology categories such as security information and event management; security orchestration, automation and response; firewall, threat intelligence and endpoint security. Mimecast said the program offers customers and partners purpose-built, ready-to-use integrations; out-of-the-box APIs; documented guides with sample code; and tutorials that explain how to use the integrations.
  • Two-thirds of channel companies said they have changed their customer experience tactics, with 10% reporting a move to an omnichannel approach for customer interaction in the past year. Those are among the results of a CompTIA survey of more than 400 channel companies. The survey cited customer recruitment and customer retention as areas where the most respondents reported deficiencies.
  • CompTIA also revealed this week that it will acquire Metacog, an assessment, certification and training software vendor based in Worcester, Mass. CompTIA said Metacog’s technology will be incorporated into the upcoming release of its online testing platform. Metacog’s technology uses AI, big data and IoT APIs.
  • Naveego unveiled Accelerator, a tool for partners to analyze data accuracy across a variety of sources. Accelerator “is a great starting point for our partners to get a feel for just how ready a customer’s data is to participate in [data-based projects]. In general, that makes the projects have a higher degree of success, as well as go much more smoothly,” said Derek Smith, CTO and co-founder of the company. Naveego works with about 10 partners, recently expanding its roster with Frontblade Systems, H2 Integrated Solutions, Mondelio and Narwal.
  • US Signal, a data center services provider based in Grand Rapids, Mich., said its DRaaS for VMware offering is generally available. The offering, based on VMware vCloud Availability, provides disaster recovery services for multi-tenant VMware clouds.
  • Synchronoss Technologies Inc., a cloud and IoT product provider based in Bridgewater, N.J., is working with distributor Arrow Electronics to develop and market an IoT smart building offering. The IoT offering is intended for telecom operators, service providers and system integrators.
  • Distributor Synnex Corp. inked a deal with Arista Networks to provide its data center and campus networking products.
  • Peerless Networks, a telecom services provider, named Ryan Patterson as vice president of channel sales. Patterson will oversee the recruitment of new master and direct agents for Peerless’ national channel program, the company said.
  • Nintex, a process management and workflow automation vendor, hired Michael Schultz as vice president of product, channel and field marketing, and Florian Haarhaus as vice president of sales for EMEA.

Market Share is a news roundup published every Friday.


Containers key for Hortonworks alliance on big data hybrid

NEW YORK — Hortonworks forged a deal with IBM and Red Hat to produce the Open Hybrid Architecture Initiative. The goal of the Hortonworks alliance is to build a common architecture for big data workloads running both on the cloud and in on-premises data centers.

Central to the Hortonworks alliance initiative is the use of Kubernetes containers. Such container-based approaches, which originated in the cloud, increasingly appear to set the tone for the big data architectures organizations will run in their own data centers.

Hortonworks’ deal was discussed as part of the Strata Data Conference here, where computing heavyweight Dell EMC also disclosed an agreement with data container specialist BlueData Software to present users with reference architecture that brings cloud-style containers on premises.

Big data infrastructure shifts

Both deals indicate changes are afoot in big data infrastructure. Container-based data schemes developed for the cloud are starting to show how data will be handled within organizations in the future.

The Hortonworks alliance hybrid initiative, as well as Dell’s and other reference architectures, reflects changes spurred by the multitude of analytics engines now available to handle data workloads, along with the movement of big data applications to the cloud, said Gartner analyst Arun Chandrasekaran in an interview.

“Historically, big data was about coupling compute and storage together. That worked pretty well when MapReduce was the sole engine. Now, there are multiple processing engines working on the same data lake,” Chandrasekaran said. “That means, in many cases, customers are thinking about decoupling compute and storage.”

De-linking computing and storage

Essentially, modern cloud deployments decouple compute and storage, Chandrasekaran said. That approach is also driving greater interest in containerizing big data workloads for portability, he noted.
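
In practice, decoupled compute and storage often looks like stateless workers, frequently containers, reading shared files straight out of object storage. The Python sketch below shows that shape using pyarrow's dataset and S3 filesystem support; the bucket path, region and column names are hypothetical, and credentials are assumed to come from the environment.

import pyarrow.dataset as ds
from pyarrow import fs

# A stateless compute worker (for example, a container scheduled by Kubernetes)
# reads shared Parquet files directly from object storage; no co-located HDFS.
s3 = fs.S3FileSystem(region="us-east-1")  # hypothetical region; credentials from the environment
dataset = ds.dataset("datalake-bucket/events/", format="parquet", filesystem=s3)

# Any number of such workers can scan the same files independently,
# and they can be torn down when the query finishes.
table = dataset.to_table(columns=["user_id", "event_type"])
print(table.num_rows)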


The shifts in architecture toward container orchestration show people want to use their infrastructure more efficiently, Chandrasekaran said.

The Hortonworks alliance with Red Hat and IBM shows a basic change is underway for the Hadoop-style open source distributed data processing framework. Cloud and on-premises architectural schemes are blending.

“We are decoupling storage and compute again,” said Arun Murthy, chief product officer and co-founder of Hortonworks, based in Santa Clara, Calif., in an interview. “As a result, the architecture will be consistent whether processing is on premises or on cloud or on different clouds.”

The elastic cloud

This style of architecture pays heed to elastic cloud methods.

Strata Data Conference 2018
This week’s Strata Data Conference in New York included a focus on Hortonworks’ deal with IBM and Red Hat, an agreement between Dell EMC and BlueData, and more.

“In public cloud, you don’t keep the architecture up and running if you don’t have to,” Murthy said.

That’s compared with what Hadoop has traditionally done in the data center, where clusters were often configured and left sitting ready for peak loads.

For Lars Herrmann, general manager of the integrated solutions business unit at Red Hat, based in Raleigh, N.C., the Hortonworks alliance project is a step toward bringing in a large class of data applications to run natively on the OpenShift container platform. It’s also about deploying applications more quickly.

“The idea of containerization of applications allows organizations to be more agile. It is part of the trend we see of people adopting DevOps methods,” Herrmann said.

Supercharging on-premises applications

For its part, Dell EMC sees spinning up data applications more quickly on premises as an important part of the reference architecture it has created with help from BlueData.

“With the container approach, you can deploy different software on demand to different infrastructure,” Kevin Gray, director of product marketing at Dell EMC, said in an interview at Strata Data.

The notion of multi-cloud support for containers is popular, and Hadoop management and deployment software providers are moving to support various clouds. At Strata, BlueData made its EPIC software available on Google Cloud Platform and Microsoft Azure; EPIC was already available on AWS.

Big data evolves to singular architecture

Tangible benefits will accrue as shops evolve toward a more singular architecture for data processing in the cloud and in the data center, said Mike Matchett, analyst and founder of Small World Big Data, in an interview at the conference.

“Platforms need to be built such that they can handle distributed work and deal with distributed data. They will be the same on premises as on the cloud. And, in most cases, they will be hybridized, so the data and the processing can flow back and forth,” Matchett said.

There still will be some special optimizations for performance, Matchett added. IT managers will decide where particular analytics processing occurs based on the workload.