
Nutanix Objects 2.0 lays groundwork for hybrid cloud

Nutanix pitched its latest Objects update as a boost for scale-out object storage workloads, but experts said the hyper-converged infrastructure specialist is likely preparing for a push into the cloud.

Nutanix Objects 2.0 introduced features aimed at big data workloads. The new multicluster support consolidates multiple Nutanix clusters under a single namespace, allowing simpler, single-console management. Nutanix Objects 2.0 also added a 240 TB node, larger than any other Nutanix HCI node, allowing more capacity per cluster. The update also added write once, read many (WORM) support and Splunk certification.

Nutanix Objects, which launched in August 2019, provides software-defined object storage and is sold as a stand-alone product separate from Nutanix HCI. Greg Smith, vice president of product marketing at Nutanix, said typical use cases include unstructured data archiving, big data and analytics. He also noted an uptick in cloud-native application development in AWS S3 environments; Nutanix Objects exposes an S3 interface.

“We see increasing demand for object storage, particularly for big data,” Smith said.

Supporting cloud-native development is the real endgame of Nutanix Objects 2.0, said Eric Slack, senior analyst at Evaluator Group. The new features and capabilities aren’t aimed at capturing customers with traditional object storage use cases, because it’s not cost-effective to put multiple petabytes on HCI. He said no one is waiting for an S3 interface before buying into HCI.

However, that S3 interface is important because, according to Slack, “S3 is what cloud storage talks.”

Slack believes the enhancements to Nutanix Objects will lay the groundwork for Nutanix Clusters, which is currently in beta. Nutanix Clusters allows Nutanix HCI to run in the cloud and communicate with Nutanix HCI running in the data center. This means organizations can develop applications on-site and run them in the cloud, or vice versa.

“I think that’s why they’re doing this — they’re getting ready for Nutanix Clusters,” Slack said. “This really plays into their cloud design, which is a good idea.”

Organizations want that level of flexibility right now because they do not know which workloads are more cost-efficient to run on premises or in the cloud. Having that same, consistent S3 interface is ideal for IT, Slack said, because it means their applications will run wherever it’s cheaper.
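To make that point concrete: when both sides speak S3, the application code itself does not have to change. The sketch below is a minimal illustration using the boto3 library; the on-premises endpoint URL, bucket, object key, file and credentials are placeholders rather than values from Nutanix documentation. Only the client construction differs between the two targets.

```python
import boto3

# Hypothetical endpoint for an on-premises, S3-compatible object store such as
# Nutanix Objects; swap in the real endpoint and credentials for your environment.
ON_PREM_ENDPOINT = "https://objects.example.internal"

def make_s3_client(endpoint_url=None):
    """Return an S3 client; endpoint_url=None targets AWS S3 itself."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="ACCESS_KEY",       # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

def archive_report(s3, bucket, key, path):
    """Upload a file; the call is identical whether the target is on premises or AWS."""
    s3.upload_file(path, bucket, key)

# Same application code, two targets: only the client construction differs.
archive_report(make_s3_client(ON_PREM_ENDPOINT), "archive", "reports/q1.csv", "q1.csv")
archive_report(make_s3_client(), "archive", "reports/q1.csv", "q1.csv")
```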

Some organizations were burned during cloud’s initial hype: they moved many of their workloads there, only to find their costs went up. Slack said that has led to repatriation back into data centers as businesses do the cost analysis.

“Cloud wasn’t everything we thought it was,” Slack said.

Scott Sinclair, senior analyst at Enterprise Strategy Group (ESG), came to a similar conclusion about the importance of Nutanix Objects. HCI is about consolidating and simplifying server, network and storage resources, and Objects expands Nutanix HCI to cover object storage’s traditional use cases: archive and active archive. However, there are growing use cases centered on developing against S3.

“We’re seeing the development of apps that write to an S3 API that may not be what we classify as traditional archive,” Sinclair said.

Nutanix’s S3 interface streamlines interaction between on-premises and cloud environments.

Citing ESG’s 2020 Technology Spending Intentions survey, Sinclair said 64% of IT decision-makers said their IT environments are more complex than they were years ago. Coupled with other data pointing to skills shortages in IT, Sinclair said organizations are currently looking for ways to simplify their data centers, resulting in interest in HCI.

That same survey also found 24% of respondents said they needed to go hybrid, with the perception that using the cloud is easier. Sinclair said this will logically lead to an increase in the use of S3 protocols, which is why Nutanix Objects is uniquely well-positioned. Right now, IT administrators know they need to use both on-premises and cloud resources, but they don’t know to what extent they should use either. That’s why businesses are taking the most flexible approach.

“Knowing that you don’t know is a smart position to take,” Sinclair said.


Cloud consultants set for massive workload shift to cloud

Cloud consultants take heed: Customers are pushing the bulk of their workloads to cloud infrastructure and a significant number are adopting related technologies such as containers.

AllCloud, a cloud and managed service provider based in Tel Aviv, Israel, said 85% of the 150 respondents to its cloud infrastructure survey expect to operate the majority of their workloads in the cloud by the end of 2020. Twenty-four percent of the IT decision-makers polled said they plan to be cloud-only organizations. The respondents work for companies with at least 300 employees and represent a range of industries.

AllCloud’s survey, published Jan. 15, also points to growing acceptance of containers, a trend other cloud consultants view as accelerating. More than 56% of respondents reported at least half of their cloud workloads use containers or microservices.

AllCloud CEO Eran Gil said cloud adoption, as reflected in the survey sample, is further along than he anticipated. He also said the extent of container adoption surprised him.

“It is interesting to see how many organizations are leveraging them,” he said of containers. “It’s far more than I expected to see.”


For cloud consultants, the transition from small-scale, individual workload migrations to more decisive shifts to the cloud may open opportunities for IT modernization.

“We are talking to [customers] about modernizing their infrastructure — not just simply taking what they have on premises and hosting it on AWS or other vendors,” Gil said.

Amid broader cloud adoption, AllCloud plans to expand in North America. The company in 2018 launched operations in North America, acquiring Figur8, a Salesforce partner with offices in San Francisco, Toronto, New York City and Vancouver, B.C. AllCloud is a Salesforce Platinum partner and an AWS Premier Consulting Partner.

“We are focusing on growing North America in particular,” Gil said, noting the company has received a new round of funding to support its expansion. “You will hear us announce acquisitions this year in either one of our ecosystems.”

The funding will also help AllCloud grow organically. Gil said the company plans to hire an AWS practice leader, who will report to Doug Shepard, AllCloud’s general manager for North America. Shepard previously was president of the Google business unit at Cloud Sherpas, a cloud consultancy Gil co-founded in 2008. Accenture acquired Cloud Sherpas in 2015.

Gil said the fundamental drivers of cloud adoption have changed dramatically since the launch of Cloud Sherpas. Back then, he said, cost was the main consideration, and security and reliability concerns were obstacles to acceptance. Security, however, emerged in AllCloud’s survey as the top consideration in cloud selection, followed by reliability. Cost ranked fourth in the list of adoption drivers.

“All the factors 10, 12 years ago that were the deterrents are now the drivers,” Gil said.

New channel hires

  • DevOps lifecycle tool provider GitLab has appointed Michelle Hodges as vice president of global channels. GitLab, which plans to go public this year, said Hodges’ hiring is part of an initiative to ramp up the company’s channel strategy. Hodges joins GitLab from Gigamon, where she served as vice president of worldwide channels.
  • Avaya named William Madison as its vice president of North America cloud sales. Madison’s prior roles included vice president of global channel development and channel chief at Masergy Communications.
  • Managed services automation company BitTitan hired Kirk Swanson as its corporate development associate. Swanson will help BitTitan pursue acquisitions in the enterprise cloud market, targeting companies with SaaS products and relationships with IT service providers and MSPs, the company said. Prior to BitTitan, Swanson served as an associate at investment firm D.A. Davidson & Co.
  • Exclusive Networks, a cloud and cybersecurity distributor, named Christine Banker as vice president of North American sales. Banker will lead vendor recruitment, inside and field sales, and Exclusive’s PC and server business, among other departments and teams, the company said.
  • Anexinet Corp., a digital business solutions provider based in Philadelphia, has appointed Suzanne Lentz as chief marketing officer. She was previously chief marketing officer of Capgemini Invent NA.
  • Workspace-as-a-service vendor CloudJumper named Amie Ray as its enterprise channel sales manager. Ray comes to CloudJumper from PrinterLogic, where she was national channel account manager.

Other news

  • WESCO International Inc. has agreed to acquire distributor Anixter International Inc. for $4.5 billion. WESCO outbid Clayton, Dubilier & Rice LLC. The deal is expected to close in the second or third quarter of 2020. According to Pittsburgh-based WESCO, the combined entity would have revenue of about $17 billion. The pending deal follows Apollo Global Management’s agreement to acquire Tech Data Corp., a distributor based in Tampa, Fla.
  • Lemongrass Consulting, a professional services and managed service provider based in Atlanta, has completed a $10 million Series C round of financing, a move the company said will help it build out its senior leadership team, boost product development, and expand sales and marketing. Rodney Rogers, co-founder and general partner of Blue Lagoon Capital, joins Lemongrass as chairman. Blue Lagoon led the new funding round. Mike Rosenbloom is taking on the group CEO role at Lemongrass. He was formerly managing director of Accenture’s Intelligent Cloud & Infrastructure business. Walter Beek, who had been group CEO at Lemongrass, will stay on with the company as co-founder and chief innovation officer. Lemongrass focuses on SAP applications running on AWS infrastructure.
  • Strategy and revenue are getting a heightened focus among CIOs, according to a Logicalis survey. The London-based IT solutions provider’s poll of 888 global CIOs found 61% of the respondents “spent more time on strategic planning in the last 12 months, while 43% are now being measured on their contribution to revenue growth.” The emphasis on strategy and revenue comes at the expense of innovation. About a third of the CIOs surveyed said the time available to spend on innovation has decreased over the last 12 months.
  • IT infrastructure management vendor Kaseya said it ended 2019 with a valuation exceeding $2 billion. Kaseya added more than 5,000 new customers and had more than $300 million in annual bookings, according to the company. Kaseya noted that the company had an organic growth rate of about 30%.
  • Cybersecurity vendor WatchGuard Technologies updated its FlexPay program with automated, monthly billing for its network security hardware and services. Partners can acquire subscriptions from WatchGuard’s distributor partners in various purchasing models, including one- and three-year contracts and pay-as-you-go terms, WatchGuard said. In the U.S., WatchGuard Subscriptions are available exclusively through the Synnex Stellr online marketplace.
  • Copper, which provides CRM for G Suite, rolled out its 2020 Partner Ambassador Program. The referral program has four partner tiers with incremental incentives, marketing resources, and training and certifications.
  • GTT Communications Inc., a cloud networking provider based in McLean, Va., has added Fortinet Secure SD-WAN to its SD-WAN service offering.
  • EditShare, a storage vendor that specializes in media creation and management, signed Key Code Media to its channel program. Key Code Media is an A/V, broadcast and post-production reseller and systems integrator.
  • Accenture opened an intelligent operation center in St. Catharines, Ont., as a hub for its intelligent sales and customer operations business. Accenture said the location is the company’s third intelligent operations center in Canada and its second in the Niagara region.

Market Share is a news roundup published every Friday.


IBM Spectrum Protect supports container backups

IBM Storage will tackle data protection for containerized and cloud-based workloads with upcoming updates to its Spectrum Protect Plus backup product and Red Hat OpenShift container platform.

Like other vendors, IBM has offered primary storage options for container-based applications. Now IBM Spectrum Protect Plus will support backup and recovery of persistent container volumes for customers who use Kubernetes orchestration engines.

IBM Spectrum Protect Plus supports the Container Storage Interface (CSI) to enable Kubernetes users to schedule snapshots of persistent Ceph storage volumes, according to IBM. The company said the Spectrum Protect backup software offloads copies of the snapshots to repositories outside Kubernetes production environments.
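IBM has not published the internals of that integration, but the CSI snapshot mechanism itself is a standard Kubernetes API. The sketch below shows, roughly, what a CSI snapshot request looks like when driven from the Kubernetes Python client; it is not IBM's tooling, the namespace, PVC name and snapshot class are hypothetical, and older clusters expose the API as snapshot.storage.k8s.io/v1beta1 rather than v1.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# A CSI VolumeSnapshot request; the namespace, PVC and snapshot class names
# are placeholders and would come from your own cluster.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",   # v1beta1 on older clusters
    "kind": "VolumeSnapshot",
    "metadata": {"name": "app-data-snap", "namespace": "prod"},
    "spec": {
        "volumeSnapshotClassName": "csi-cephfs-snapclass",
        "source": {"persistentVolumeClaimName": "app-data-pvc"},
    },
}

api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="prod",
    plural="volumesnapshots",
    body=snapshot,
)
```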

IBM will offer a tech preview of the container backup support in the OpenShift platform that it gained through its Red Hat acquisition. The tech preview is scheduled for this year with general availability expected in the first quarter of 2020, subject to the availability of CSI snapshot support in Red Hat OpenShift, according to Eric Herzog, CMO and vice president of world storage channels at IBM.

“The problem with Kubernetes is there’s really no standard storage architecture. So you’re starting to see all of the vendors scramble to implement CSI driver support, which links your Kubernetes containers with backend storage,” said Steve McDowell, a senior analyst at Moor Insights and Strategy.

CSI snapshots

McDowell said IBM and other vendors are stepping up to provide CSI drivers for general-purpose backend storage for containers. He said few, if any, tier one vendors support CSI snapshots for data protection of Kubernetes clusters.

But enterprise demand is still nascent for persistent storage for containerized applications and, by extension, backup and disaster recovery, according to IDC research manager Andrew Smith. He said many organizations are still in the early discovery or initial proof of concept phase.

Smith said IBM can fill a gap in the OpenShift Kubernetes ecosystem if it can establish Spectrum Protect as a platform for data protection and management moving forward.

Randy Kerns, a senior strategist and analyst at Evaluator Group, said early adopters often stand up their container-based applications separately from their virtual machine environments.

“Now you’re starting to see them look and say, ‘What data protection software do I have that’ll work with containers? And, does that work in my virtual machine environment as well?'” Kerns said. “This is an early stage thing for a lot of customers, but it’s really becoming more current as we go along. OpenShift is going to be one of the major deployment environments for containers, and IBM and Red Hat have a close relationship now.”

IBM Spectrum Protect Plus for VMware

In virtual environments, VMware administrators will be able to deploy IBM Spectrum Protect Plus in VMware Cloud on AWS. IBM said Spectrum Protect Plus would support VMware Cloud on AWS, in addition to the IBM Cloud and the various on-premises options available in the past. Herzog said IBM Spectrum Protect Plus would support backups to additional public clouds starting in 2020, in keeping with the storage division’s long-standing multi-cloud strategy.

Also this week, IBM introduced a new TS7770 Virtual Tape Library built with its latest Power 9 processors and higher density disks. The TS7770 will target customers of IBM’s new z15 mainframe, Herzog said.


IT pros look to VMware’s GPU acceleration projects to kick-start AI

SAN FRANCISCO — IT pros who need to support emerging AI and machine learning workloads see promise in a pair of developments VMware previewed this week to bolster support for GPU-accelerated computing in vSphere.

GPUs are uniquely suited to handle the massive processing demands of AI and machine learning workloads, and chipmakers like Nvidia Corp. are now developing and promoting GPUs specifically designed for this purpose.

A previous partnership with Nvidia introduced capabilities that allowed VMware customers to assign GPUs to VMs, but not more than one GPU per VM. The latest development, which Nvidia calls its Virtual Compute Server, allows customers to assign multiple virtual GPUs to a VM.

Nvidia’s Virtual Compute Server also works with VMware’s vMotion capability, allowing IT pros to live migrate a GPU-accelerated VM to another physical host. The companies have also extended this partnership to VMware Cloud on AWS, allowing customers to access Amazon Elastic Compute Cloud bare-metal instances with Nvidia T4 GPUs.

VMware gave the Nvidia partnership prime time this week at VMworld 2019, playing a prerecorded video of Nvidia CEO Jensen Huang talking up the companies’ combined efforts during Monday’s general session. However, another GPU acceleration project also caught the eye of some IT pros who came to learn more about VMware’s recent acquisition of Bitfusion.io Inc.

VMware acquired Bitfusion earlier this year and announced its intent to integrate the startup’s GPU virtualization capabilities into vSphere. Bitfusion’s FlexDirect connects GPU-accelerated servers over the network and provides the ability to assign GPUs to workloads in real time. The company compares its GPU virtualization approach to network-attached storage because it disaggregates GPU resources and makes them accessible to any server on the network as a pool of resources.

The software’s unique approach also allows customers to assign just portions of a GPU to different workloads. For example, an IT pro might assign 50% of a GPU’s capacity to one VM and 50% to another VM. This approach can allow companies to more efficiently use its investments in expensive GPU hardware, company executives said. FlexDirect also offers extensions to support field-programmable gate arrays and application-specific integrated circuits.

“I was really happy to see they’re doing this at the network level,” said Kevin Wilcox, principal virtualization architect at Fiserv, a financial services company. “We’ve struggled with figuring out how to handle the power and cooling requirements for GPUs. This looks like it’ll allow us to place our GPUs in a segmented section of our data center that can handle those power and cooling needs.”

AI demand surging

Many companies are only beginning to research and invest in AI capabilities, but interest is growing rapidly, said Gartner analyst Chirag Dekate.

“By end of this year, we anticipate that one in two organizations will have some sort of AI initiative, either in the [proof-of-concept] stage or the deployed stage,” Dekate said.

In many cases, IT operations professionals are being asked to move quickly on a variety of AI-focused projects, a trend echoed by multiple VMworld attendees this week.

“We’re just starting with AI, and looking at GPUs as an accelerator,” said Martin Lafontaine, a systems architect at Netgovern, a software company that helps customers comply with data locality compliance laws.

“When they get a subpoena and have to prove where [their data is located], our solution uses machine learning to find that data. We’re starting to look at what we can do with GPUs,” Lafontaine said.

Is GPU virtualization the answer?

Recent efforts to virtualize GPU resources could open the door to broader use of GPUs for AI workloads, but potential customers should pay close attention to benchmark testing against bare-metal deployments in the coming years, Gartner’s Dekate said.

So far, he has not encountered a customer using these GPU virtualization tactics for deep learning workloads at scale. Today, most organizations still run these deep learning workloads on bare-metal hardware.

 “The future of this technology that Bitfusion is bringing will be decided by the kind of overheads imposed on the workloads,” Dekate said, referring to the additional compute cycles often required to implement a virtualization layer. “The deep learning workloads we have run into are extremely compute-bound and memory-intensive, and in our prior experience, what we’ve seen is that any kind of virtualization tends to impose overheads. … If the overheads are within acceptable parameters, then this technology could very well be applied to AI.”


Windows troubleshooting tools to improve VM performance

Whether virtualized workloads stay on premises or move to the cloud, support for those VMs remains in the data center with the administrator.

When virtualized workloads don’t perform as expected, admins need to roll up their sleeves and break out the Windows troubleshooting tools. Windows has always had some level of built-in diagnostic ability, but it only goes so deep.

Admins need to stay on top of ways to analyze ailing VMs, but they also need to find ways to trim deployments to control resource use and costs for cloud workloads.

VM Fleet adds stress to your storage

VM Fleet tests the performance of your storage infrastructure by simulating virtual workloads. VM Fleet uses PowerShell to create a collection of VMs and run a stress test against the allocated storage.

This process verifies that your storage meets expectations before deploying VMs to production. VM Fleet doesn’t help troubleshoot issues, but it helps confirm the existing performance specifications before you ramp up your infrastructure. After the VMs are in place, you can use VM Fleet to perform controlled tests of storage auto-tiering and other technologies designed to adjust workloads during increased storage I/O.


Sysinternals utilities offer deeper insights

Two Windows troubleshooting tools from the Microsoft Sysinternals collection, Process Explorer and Process Monitor, should be staples for any Windows admin.

Process Explorer gives you in-depth detail, including the dynamic link library and memory mapped files loaded by a process. Process Explorer also lets you dig in deep to uncover issues rather than throwing more resources at an application and, thus, masking the underlying problem.

Process Explorer lets administrators do a technical deep dive into Windows processes that Task Manager can’t provide.

Process Monitor captures real-time data on process activity, registry changes and file system changes on Windows systems. It also provides detailed information on process trees.

Administrators can use Process Monitor’s search and filtering functions to focus on particular events that occur over a longer period of time.

VMMap and RAMMap detail the memory landscape

Another Sysinternals tool, VMMap, shows what types of virtual memory are assigned to a process and its committed memory, which is the virtual memory reserved by the operating system. The tool presents a visual breakdown of where allocated memory is used.

VMMap shows how the operating system maps physical memory and uses memory in the virtual space, helping administrators analyze how applications work with memory resources.

VMMap doesn’t check the hypervisor layer, but it does detail virtual memory use provided by the OS. Combined with other tools that view the hypervisor, VMMap gives a complete picture of the applications’ memory usage.

Another tool called RAMMap is similar to VMMap, but it works at the operating system level rather than the process level. Administrators can use both tools to get a complete picture of how applications obtain and use memory.
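VMMap and RAMMap are interactive tools, but the working-set versus committed-memory distinction they visualize can also be sampled in code. The snippet below is a rough sketch using the psutil package rather than any Sysinternals API: it lists each accessible process’s resident set (rss) and committed virtual size (vms), the two figures an administrator would otherwise cross-check in VMMap.

```python
import psutil

# Not a Sysinternals API: psutil exposes the same working-set (rss) versus
# committed virtual memory (vms) distinction that VMMap visualizes per process.
for proc in psutil.process_iter(["name", "memory_info"]):
    name = proc.info["name"] or "?"
    info = proc.info["memory_info"]
    if info is None:          # access denied for some system processes
        continue
    print(f"{name:<30} rss={info.rss / 2**20:8.1f} MiB  vms={info.vms / 2**20:8.1f} MiB")
```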

BgInfo puts pertinent information on display

BgInfo is a small Sysinternals utility that displays selected system information on the desktop, such as the machine name, IP address, patch version and storage information.

While it’s not difficult to find these settings, making them more visible helps when you log into multiple VMs in a short amount of time. It also helps you avoid installing software on, or even rebooting, the wrong VM.

Notre Dame uses N2WS Cloud Protection Manager for backup

Coinciding with its decision to eventually close its data center and migrate most of its workloads to the public cloud, the University of Notre Dame’s IT team switched to cloud-native data protection.

Notre Dame, based in Indiana, began its push to move its business-critical applications and workloads to Amazon Web Services (AWS) in 2014. Soon after, the university chose N2WS Cloud Protection Manager to handle backup and recovery.

Now, 80% of the applications used daily by faculty members and students, as well as the data associated with those services, lives on the cloud. The university protects more than 600 AWS instances, and that number is growing fast.

In a recent webinar, Notre Dame systems engineer Aaron Wright talked about the journey of moving a whopping 828 applications to the cloud, and protecting those apps and their data.  

N2WS, which was acquired by Veeam earlier this year, is a provider of cloud-native, enterprise backup and disaster recovery for AWS. The backup tool is available through the AWS Marketplace.

Wright said Notre Dame’s main impetus for migrating to the cloud was to lower costs. Moving services to the cloud would reduce the need for hardware. Wright said the goal is to eventually close the university’s on-premises primary data center.

“We basically put our website from on premises to the AWS account and transferred the data, saw how it worked, what we could do. … As we started to see the capabilities and cost savings [of the cloud], we were wondering what we could do to put not just our ‘www’ services on the cloud,” he said.

Wright said Notre Dame plans to move 90% of its applications to the cloud by the end of 2018. “The data center is going down as we speak,” he said.


As a research organization that works on projects with U.S. government agencies, Notre Dame owns sensitive data. Wright saw the need for centralized backup software to protect that data but could not find many good commercial options for protecting cloud data, until he came across N2WS Cloud Protection Manager in the AWS Marketplace.

“We looked at what it would cost us to build our own backup software and estimated it would cost 4,000 hours between two engineers,” he said. By comparison, Wright said his team deployed Cloud Protection Manager in less than an hour.

Wright said N2WS Cloud Protection Manager has rescued Notre Dame’s data at least twice since the installation. One incident came when Linux machines failed to boot after a patch was applied; engineers restored data from snapshots within five minutes. Wright said his team used the snapshots to find and detach a corrupted Amazon Elastic Block Store volume, and then manually created and attached a new volume.
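The recovery Wright describes, finding the snapshot, detaching the corrupted Amazon Elastic Block Store volume and attaching a freshly created replacement, maps onto a handful of EC2 API calls. The sketch below is a rough illustration using boto3 with placeholder volume, snapshot and instance IDs and a placeholder device name; it is not N2WS’s own workflow, which automates these steps behind its console.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder identifiers; in practice these come from the backup catalog.
BAD_VOLUME_ID = "vol-0123456789abcdef0"
SNAPSHOT_ID = "snap-0123456789abcdef0"
INSTANCE_ID = "i-0123456789abcdef0"
AZ = "us-east-1a"

# 1. Detach the corrupted volume from the instance.
ec2.detach_volume(VolumeId=BAD_VOLUME_ID, InstanceId=INSTANCE_ID, Force=True)

# 2. Create a replacement volume from the last good snapshot and wait for it.
new_vol = ec2.create_volume(SnapshotId=SNAPSHOT_ID, AvailabilityZone=AZ)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. Attach the restored volume in place of the old one.
ec2.attach_volume(VolumeId=new_vol["VolumeId"], InstanceId=INSTANCE_ID, Device="/dev/sdf")
```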

In another incident, Wright said the granularity of the N2WS Cloud Protection Manager backup capabilities proved valuable.

“Back in April-May 2018, we had to do a single-file restore through Cloud Protection Manager. Normally, we would have to have taken the volume and recreated a 300-gig volume,” he said. Locating and restoring that single file so quickly allowed him to resolve the incident within five minutes.

Silver Peak SD-WAN adds service chaining, partners for cloud security

Silver Peak boosted its software-defined WAN security for cloud-based workloads with the introduction of three security partners.

Silver Peak Unity EdgeConnect customers can now add security capabilities from Forcepoint, McAfee and Symantec for layered security in their Silver Peak SD-WAN infrastructure, the vendor said in a statement. The three security newcomers join existing Silver Peak partners Check Point, Fortinet, OPAQ Networks, Palo Alto Networks and Zscaler.

Silver Peak SD-WAN allows customers to filter application traffic that travels to and from cloud-based workloads through security processes from third-party security partners. Customers can insert virtual network functions (VNFs) through service chaining wherever they need the capabilities, which can include traffic inspection and verification, distributed denial-of-service protection and next-generation firewalls.

These partnership additions build on Silver Peak’s recent update to incorporate a drag-and-drop interface for service chaining and enhanced segmentation capabilities. For example, Silver Peak said a typical process starts with customers defining templates for security policies that specify segments for users and applications. This segmentation can be created based on users, applications or WAN services — all within Silver Peak SD-WAN’s Unity Orchestrator.

Once the template is complete, Silver Peak SD-WAN launches and applies the security policies for those segments. These policies can include configurations for traffic steering, so specific traffic automatically travels through certain security VNFs, for example. Additionally, Silver Peak said customers can create failover procedures and policies for user access.

Enterprises are increasingly moving their workloads to public cloud and SaaS environments, such as Salesforce or Microsoft Office 365. Securing that traffic — especially traffic that travels directly over broadband internet connections — remains top of mind for IT teams, however. By service chaining security functions from third-party security companies, Silver Peak SD-WAN customers can access those applications more securely, the company said.

Silver Peak SD-WAN holds 12% of the $162 million SD-WAN market, according to a recent IHS Markit report, which ranks the vendor third after VMware-VeloCloud and Aryaka.

ONF pinpoints four technology areas to develop

The Open Networking Foundation unveiled four new supply chain partners that are working to develop technology reference designs based on ONF’s strategic plan. Along with the four partners — Adtran, Dell EMC, Edgecore Networks and Juniper Networks — ONF finalized the focus areas for the initial reference designs.

ONF’s reference designs provide blueprints to follow while building open source platforms that use multiple components, the foundation said in a statement. While the broad focus for these blueprints looks at edge cloud, ONF targeted four specific technology areas:

  • SDN-enabled broadband access. This reference design is based on a variant of the Residential Central Office Re-architected as a Datacenter project, which is designed to virtualize residential access networks. ONF’s project likewise supports virtualized access technologies.
  • Network functions virtualization fabric. This blueprint develops work on leaf-spine data center fabric for edge applications.
  • Unified programmable and automated network. ONF touts this as a next-generation SDN reference design that uses the P4 language for data plane programmability.
  • Open disaggregated transport network. This reference design focuses on open multivendor optical networks.

Adtran, Dell EMC, Edgecore and Juniper will each apply their own technology expertise to these reference design projects, ONF said. Additionally, as supply chain partners, they’ll aid operators in assembling deployment environments based on the reference designs.

Hybrid cloud security architecture requires rethinking

Cloud security isn’t for the squeamish. Protecting cloud-based workloads and designing a hybrid cloud security architecture has become a more difficult challenge than first envisioned, said Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass.

“The goal was simple,” he said. Enterprises wanted the same security they had for their internal workloads to be extended to the cloud.

But using existing security apps didn’t work out so well. In response, enterprises tried to concoct their own, but that meant the majority of companies had separate security foundations for their on-premises and cloud workloads, Oltsik said.

The answer in creating a robust hybrid cloud security architecture is central policy management, where all workloads are tracked, policies and rules applied and networking components displayed in a centralized console. Firewall and security vendors are beginning to roll out products supporting this strategy, Oltsik said, but it’s still incumbent upon CISOs to proceed carefully.

“The move to central network security policy management is a virtual certainty, but which vendors win or lose in this transition remains to be seen.”

Read the rest of what Oltsik had to say about centralized cloud security.

User experience management undergoing a shift

User experience management, or UEM, is a more complex concept than you may realize.

Dennis Drogseth, an analyst at Enterprise Management Associates in Boulder, Colo., described the metamorphosis of UEM, debunking the notion that the methodology is merely a subset of application performance management.

Instead, Drogseth said, UEM is multifaceted, encompassing application performance, business impact, change management, design, user productivity and service usage.

According to EMA research, over the last three years the two most important areas for UEM have been application performance and portfolio planning and optimization. UEM can provide valuable insights to assist both IT and the business.

One question surrounding UEM is whether it falls into the realm of IT or business. In years past EMA data suggested 20% of networking staffers considered UEM a business concern, 21% an IT concern and 59% said UEM should be equally an IT and business concern. Drogseth agreed wholeheartedly with the latter group.

Drogseth expanded on the usefulness of UEM in his blog, including how UEM is important to DevOps and creating an integrated business strategy.

Mixed LPWAN results, but future could be bright

GlobalData analyst Kitty Weldon examined the evolving low-power WAN market in the wake of the 2018 annual conference in London.

Mobile operators built out their networks for LPWAN in 2017, Weldon said,  and are now starting to look for action. Essentially every internet of things (IoT) service hopped on the LPWAN bandwagon; now they await the results.

So far, there have been 48 launches by 26 operators.

The expectation remains that lower costs and improved battery life will eventually usher in thousands of new low-bandwidth IoT devices connecting to LPWANs. However, Weldon notes that it’s still the beginning of the LPWAN era, and right now feelings are mixed.

“Clearly, there is some concern in the industry that the anticipated massive uptake of LPWANs will not be realized as easily as they had hoped, but the rollouts continue and optimism remains, tempered with realistic concerns about how best to monetize the investments.”

Read more of what Weldon had to say here.

Azure Backup service adds layer of data protection

It’s more important than ever to have a solid backup strategy for company data and workloads. Microsoft’s Azure Backup service has matured into a product worth considering due to its centralized management and ease of use.

Whether it’s ransomware or other kinds of malware, the potential for data corruption is always lurking. That means that IT admins need a way to streamline backup procedures with the added protection and high availability made possible by the cloud.

Azure Backup protects on-premises workloads, including SharePoint, SQL Server, Exchange, file servers, client machines and VMs, as well as cloud resources such as infrastructure-as-a-service VMs, consolidating them into one recovery vault with solid data protection and restore capabilities. Administrators can monitor and start backup and recovery activities from a single Azure-based portal. After the initial setup, this arrangement lightens the burden on IT because offsite backups require minimal time and effort to maintain.

How Azure Backup works

The Azure Backup service stores data in what Microsoft calls a recovery vault, which is the central storage locker for the service whether the backup targets are in Azure or on premises.


The administrator needs to create the recovery vault before the Azure Backup service can be used. From the Azure console, select All services, type in Recovery Services and select Recovery Services vaults from the menu. Click Add, give it a name, associate it with an Azure subscription, choose a resource group and location, and click Create.
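For administrators who prefer scripting to the portal, the same vault can be created with the Azure SDK for Python. The sketch below is only a rough outline under stated assumptions: it presumes the azure-identity and azure-mgmt-recoveryservices packages are installed, uses placeholder subscription, resource group and vault names, and on older SDK versions the operation is named create_or_update rather than begin_create_or_update.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.recoveryservices import RecoveryServicesClient
from azure.mgmt.recoveryservices.models import Vault, Sku, VaultProperties

# Placeholder subscription ID; resource group and vault names below are hypothetical.
client = RecoveryServicesClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.vaults.begin_create_or_update(
    "backup-rg",               # hypothetical resource group
    "demo-recovery-vault",     # hypothetical vault name
    Vault(location="eastus", sku=Sku(name="Standard"), properties=VaultProperties()),
)
print(poller.result().id)
```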

From there, to back up on-premises Windows Server machines, open the vault and click the Backup button. Azure will prompt for certain information: whether the workload is on premises or in the cloud and what to back up — files and folders, VMs, SQL Server, Exchange, SharePoint instances, system state information, and data to kick off a bare-metal recovery. When this is complete, click the Prepare Infrastructure link.


Configure backup for a Windows machine

The Microsoft Azure Recovery Services Agent (MARS) handles on-premises backups. Administrators download the MARS agent from the Prepare Infrastructure link, which also supplies the recovery vault credentials, and install it on the machines to protect. The agent uses those credentials to link the on-premises machine to the Azure subscription and its recovery vault.

Azure Backup pricing

Microsoft determines Azure Backup pricing based on two components: the number of protected VMs or other instances — Microsoft charges for each discrete item to back up — and the amount of backup data stored within the service. The monthly pricing is:

  • for instances up to 50 GB, each instance is $5 per month, plus storage consumed;
  • for instances more than 50 GB, but under 500 GB, each instance is $10, plus storage consumed; and
  • for instances more than 500 GB, each instance is $10 per nearest 500 GB increment, plus storage consumed.
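As a quick worked example of those tiers, the per-instance fee can be computed as follows; this sketch interprets “per nearest 500 GB increment” as rounding up to the next 500 GB, and storage consumption is billed separately at the blob rates discussed below.

```python
import math

def monthly_instance_fee(protected_gb: float) -> int:
    """Per-instance Azure Backup fee from the tiers above; storage is billed separately."""
    if protected_gb <= 50:
        return 5
    if protected_gb <= 500:
        return 10
    # $10 for each 500 GB increment, rounded up here (e.g., 1.2 TB -> 3 increments).
    return 10 * math.ceil(protected_gb / 500)

# 40 GB -> $5, 300 GB -> $10, 1,200 GB -> $30 per month, plus storage consumed.
print(monthly_instance_fee(40), monthly_instance_fee(300), monthly_instance_fee(1200))
```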

Microsoft bases its storage prices on block blob storage rates, which vary based on the Azure region. While it’s less expensive to use locally redundant blobs than geo-redundant blobs, local blobs are less fault-tolerant. Restore operations are free; Azure does not charge for outbound traffic from Azure to the local network.

Pros and cons of the Azure Backup service

The service has several features that are beneficial to the enterprise:

  • There is support to back up on-premises VMware VMs. Even though Azure is a Microsoft cloud service, the Azure Backup product will take VMware VMs as they are and back them up. It’s possible to install the agent inside the VM on the Windows Server workload, but it’s neater and cleaner to just back up the VM.
  • Administrators manage all backups from one console regardless of the target location. Microsoft continually refines the management features in the portal, which is very simple to use.
  • Azure manages storage needs and automatically adjusts as required. This avoids the challenges and capacity limits associated with on-premises backup tapes and hard drives.

The Azure Backup service isn’t perfect, however.

  • It requires some effort to understand pricing. Organizations must factor in what it protects and how much storage those instances will consume.
  • The Azure Backup service supports Linux, but it requires the use of a customized copy of System Center Data Protection Manager (DPM), which is more laborious compared to the simplicity and ease of MARS.
  • Backing up Exchange, SharePoint and SQL workloads requires the DPM version that supports those products. Microsoft includes it with the service costs, so there’s no separate licensing fee, but it still requires more work to deploy and understand.

The Azure Backup service is one of the more compelling administrative offerings from Microsoft. I would not recommend it as a company’s sole backup product — local backups are still very important, and even more so if time to restore is a crucial metric for the enterprise — but Azure Backup is a worthy addition to a layered backup strategy.

Azure migration takes hostile approach to lure VMware apps

The two biggest public cloud providers have set their sights on VMware workloads, though they’re taking different approaches to accommodate the hypervisor heavyweight and its customers.

A little over a year after Amazon Web Services (AWS) and VMware pledged to build a joint offering to bridge customers’ public and private environments, Microsoft this week introduced a similar service for its Azure public cloud. There’s one important distinction, however: VMware is out of the equation, a hostile move met with equal hostility from VMware, which said it would not support the service.

Azure Migrate offers multiple ways to get on-premises VMware workloads to Microsoft’s public cloud. Customers now can move VMware-based applications to Azure with a free tool to assess their environments, map out dependencies and migrate using Azure Site Recovery. Once there, customers can optimize workloads for Azure via cost management tools Microsoft acquired from Cloudyn.

This approach eschews VMware virtualization and adapts these applications to a more cloud-friendly architecture that can use a range of other Azure services. A multitude of third-party vendors offer similar capabilities. It’s the other part of the Azure migration service that has drawn the ire of VMware.

VMware virtualization on Azure is a bare-metal subset of Azure Migrate that can run a full VMware stack on Azure hardware. It’s expected to be generally available sometime next year. This offering is a partnership with unnamed VMware-certified partners and VMware-certified hardware, but it notably cuts VMware out of the process, and out of the revenue stream.

In response, VMware criticized Microsoft’s characterization of the Azure migration service as part of a transition to the public cloud. In a blog post, Ajay Patel, VMware senior vice president, cited the lack of joint engineering between VMware and Microsoft and said the company won’t recommend or support the product.

This isn’t the first time these two companies have butted heads. Microsoft launched Hyper-V almost a decade ago with similar aggressive tactics to pull companies off VMware’s hypervisor, said Steve Herrod, who was CTO at VMware at the time. Herrod is currently managing director at venture capital firm General Catalyst.

Part of the motivation here could be Microsoft posturing either to negotiate a future deal with VMware or to ensure it doesn’t lose out on these types of migrations, Herrod said. And of course, if VMware had its way, its software stack would be on all the major clouds, he added.


VMware on AWS, which became generally available in late August, is operated by VMware, and through the company’s Cloud Foundation program ports its software-defined data centers to CenturyLink, Fujitsu, IBM Cloud, NTT Communications, OVH and Rackspace. The two glaring holes in that swath of partnerships are Azure and Google Cloud, widely considered to be the second and third most popular public clouds behind AWS.

Companies have a mix of applications: some are well-suited to a transition to the cloud, while others must stay inside a private data center or can’t be re-architected for the cloud. Hence, a hybrid cloud strategy has become an attractive option, and VMware’s recent partnerships have made companies feel more comfortable with the public cloud and with curbing the management of their own data centers.

“I talk to a lot of CIOs and they love the fact that they can buy VMware and now feel VMware has given them the all-clear to being in the cloud,” Herrod said. “It’s purely promise that they’re not locked into running VMware in their own data center that has caused them to double down on VMware.”


VMware virtualization on Azure is also an acknowledgement that some applications are not good candidates for the cloud-native approach, said Jeff Kato, an analyst at Taneja Group in Hopkinton, Mass.

“The fact that they have to offer VMware bare metal to accelerate things tells you there are workloads people are reluctant to move to the public cloud, whether that’s on Hyper-V or even AWS,” he said.

Some customers will prefer VMware on AWS, but it won’t be a thundering majority, said Carl Brooks, an analyst at 451 Research. There’s also no downside for Microsoft to support what customers already do, and the technical aspect of this move is relatively trivial, he added.

“It’s a buyer’s market, and none of the major vendors are going to benefit from trying to narrow user options — quite the opposite,” Brooks said.

Perhaps it’s no coincidence that Microsoft debuted the Azure migration service in the days leading up to AWS’ major user conference, re:Invent, where there is expected to be more talk about the partnership between Amazon and VMware. It’s also notable that AWS is only a public cloud provider, so it doesn’t have the same level of competitive friction as there has been historically between Microsoft and VMware, Kato said.

“Microsoft [is] trying to ride this Azure momentum to take more than their fair share of [the on-premises space], and in order to do that, they’re going to have to come up with a counterattack to VMware on AWS,” he said.

Despite VMware’s lack of support for the Azure migration service, it’s unlikely it can do anything to stop it, especially if it’s on certified hardware, Kato said. Perhaps VMware could somehow interfere with how well the VMware stack integrates with native Azure services, but big enterprises could prevent that, at least for their own environments.

“If the customer is big enough, they’ll force them to work together,” Kato said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].