Liquidware user experience monitoring fills gap in DaaS migration

When IT leaders at a global events and publishing company chose to move their physical and virtual desktops to the cloud, they quickly discovered they couldn’t do it alone.

By early 2016, Informa had more than 1,000 employees using Citrix virtual desktops. As that number grew, the desktops and their support infrastructure became increasingly difficult to manage.

“Complexity is the biggest enemy in IT,” said Martin van Nijnatten, head of end-user computing at the London-based company. “That was the key argument for moving from doing your own VDI to desktop as a service.”

At the same time, the VDI user experience was getting worse.

“There was a big gap between the physical desktops and the VDI estate,” said Peter MacNamara, senior VDI engineer at Informa. “Your user experience would not be the same wherever you went.”

The end-user computing team decided to migrate from physical and virtual desktops to Amazon Web Services’ desktop as a service (DaaS) offering, WorkSpaces. The move was made possible by Liquidware, whose products — particularly its user experience monitoring software — identified potential problems and provided much-needed management capabilities for the new cloud desktops and applications.

“[With WorkSpaces], you don’t have the tools that Citrix and VMware have natively,” MacNamara said. “So we had to fill that gap. Liquidware, especially with their monitoring tool, let us do that.”

User experience monitoring gets proactive

After selecting AWS, Informa evaluated several virtual desktop management and user experience monitoring vendors to assist with the migration. The company considered RES Software (which Ivanti has since acquired), Unidesk (which Citrix has since acquired) and FSLogix in addition to Liquidware.

After a proof-of-concept deployment that ran through late 2016, Liquidware won out. Its Liquidware Essentials Bundle — which includes Stratusphere for user experience monitoring, ProfileUnity for user environment management and FlexApp for application layering — provided the capabilities Informa needed, and it wasn’t overly complicated to use, MacNamara said. It took less than a day to set up Stratusphere, which is available as an appliance in the Amazon Marketplace, and get it monitoring the Citrix virtual desktops, he said.

The user experience monitoring tool immediately paid dividends, identifying applications that could potentially cause problems when they moved to the cloud. The performance hit that McAfee’s antivirus software caused on the virtual desktops, for example, would have been too much to bear on WorkSpaces, MacNamara said. Armed with this information, the IT department was able to address the issue before it affected users.

“It’s moving from being reactive to proactive,” van Nijnatten said.

Informa used the information gleaned from Stratusphere to right-size its Amazon WorkSpaces deployment, making sure it allocated enough resources so as not to cause any performance problems, said Dave Johnson, who worked with Informa on this project as a Liquidware sales manager. And Liquidware’s ProfileDisks feature helped Informa capture user profiles on physical and Citrix virtual desktops and migrate them to Amazon WorkSpaces, Johnson said.

The performance data Stratusphere provided proved so valuable that Informa rolled the product out to its physical desktops as well. There are some improvements that van Nijnatten said he would like to see, however. Tops on that list is the incorporation of machine learning technology.

“Right now, you still have to do a lot of digging and conclusion-drawing yourself by looking at the data,” he said. “I think that there’s an opportunity to collate that data and create some more intelligence out of it.”

VDI-to-DaaS migrations catching on

For most of DaaS’ existence, organizations considered it almost exclusively for greenfield deployments. Migrating from VDI to DaaS was too complex, and it was a waste to abandon investments in on-premises virtual desktops, the thinking went.

That’s slowly changing. At Informa, it was more important to embrace the future than to hold on to the past, MacNamara said.

Informa is one of many Liquidware customers that have moved or are considering moving from VDI to DaaS, Johnson said.

“A number of organizations have moved their core infrastructure to the cloud, and now they’re looking at moving their desktops,” he said. “To the user, it’s a very minimal impact, because the look and feel of the desktop is the same.”

Informa has run its IT infrastructure on AWS for more than a decade, dating back to a time when “everybody said you were out of your mind” if you moved core services to the cloud, van Nijnatten said. That familiarity led the company to choose Amazon WorkSpaces over DaaS offerings from Citrix and VMware, because those vendors still have a certain level of reliance on their on-premises VDI products, he said.

“Amazon was born in the cloud, and Citrix and VMware [weren’t],” van Nijnatten added.

An ongoing process

Informa’s work with Liquidware and Amazon WorkSpaces is not complete; the company still plans to move the remaining pockets of Citrix users to AWS and is also in the process of migrating from Windows 7 to Windows 10. The scale of that operating system upgrade would have been impossible for Informa’s Citrix infrastructure to handle, van Nijnatten said.

“We would’ve needed to redesign the whole setup,” he said.

The ultimate goal is to offer nonpersistent cloud desktops that rely on ProfileUnity to provide a consistent user experience and an added level of security.

“Now what we’re working towards is, you can log on to an Amazon workspace and your settings follow you,” MacNamara said. “Your documents follow you. It’s all there.”

I-Squared will help ensure the US has the skilled talent it needs to grow – Microsoft on the Issues

The lifeblood of Microsoft is and will always be our employees. Our company was built by a world-class team made up of many of the best and brightest people, including many of the best software developers from around the world. High-skilled immigration has been important not only to the success of Microsoft and other individual tech companies, but also to the global leadership position of the entire American tech sector. Our collective success won’t continue unless Congress reforms the nation’s immigration system into one that protects American workers while preserving the ability of American companies to continue to recruit the world’s best high-skilled talent.

That’s why we support new legislation introduced Thursday by Sen. Orrin Hatch and Sen. Jeff Flake that takes important steps to reduce the green card backlog, strengthen U.S. worker protections, prevent H-1B program abuse and raise new STEM training funds for Americans. At a time of such great discourse in our country around immigration, we believe that S.2344, the Immigration Innovation (I-Squared) Act, strikes the right balance to keep our economy strong, attract and retain top global talent and build more opportunities for American workers. We hope the Senate’s leaders will come together to support these important reforms.

One of the most important features of I-Squared is its focus on eliminating bottlenecks in the lengthy green card process for high-skilled workers. As we’ve stated previously as we’ve endorsed and spoken out for HR 392, current per country limits on employment-based green cards are arbitrary and create uncertainty and tremendous hardship for our employees and their families as they endure decades-long backlogs. This uncertainty is also not good for American businesses that want to retain this valuable talent in the country.

I-Squared eliminates those discriminatory per country limits. It also ensures that green card numbers that have gone unused in prior years due to bureaucratic processes are not wasted, but instead applied to reduce the existing backlog. I-Squared further proposes a new conditional green card process for a more direct path to permanent residence, giving more security to both employers and employees. If Congress could move forward and diminish the many uncertainties in the green card process, we could then focus even more effort on our work creating next generation technology.

I-Squared also takes significant steps to strengthen protections for American workers and prevent abuses of the H-1B program. The bill directly prohibits use of the H-1B program to displace American workers; it prohibits certain practices that currently get in the way of ensuring that H-1Bs that have been approved are actually used; and it implements more rigorous wage requirements. At the same time, the bill builds flexibility into the program to adjust at a measured pace to the market demand for high-skilled talent.

Particularly in today’s strong economy, we need to take additional steps to prepare Americans for digital jobs by investing in our domestic STEM training programs. Through the additional fees imposed by I-Squared, close to an additional $1 billion could be provided to states each year to support STEM education, build the country’s talent pipeline and fund training for U.S. workers to enter STEM fields, including apprenticeships. As we’ve said before, these are fees that Microsoft is more than prepared to pay.

High-skilled immigration programs are critical to meeting our country’s need for skilled talent. But they need to complement — not compete with — investment in the American workforce. The bill introduced by Senators Hatch and Flake hits the right note and makes the system better for all of us.

Hyper engine aims to give enterprise Tableau analytics a boost

Tableau is continuing its focus on enterprise functionality, rolling out several new features that the company hopes will make its data visualization and analytics software more attractive as an enterprise tool to help broaden its appeal beyond an existing base of line-of-business users.

In particular, the new Tableau 10.5 release, launched last week, includes the long-awaited Hyper in-memory compute engine. Company officials said Hyper will bring vastly improved speeds to the software and support new Tableau analytics use cases, like internet of things (IoT) analytics applications.

The faster speeds will be particularly noticeable, they said, when users refresh Tableau data extracts, which are in-memory snapshots of data from a source file. Extracts can reach large sizes, and refreshing larger files took time in previous releases.
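
For a concrete sense of what an extract is, the sketch below builds a tiny .hyper file with Tableau's tableauhyperapi Python package, which Tableau later released for working with the Hyper format. The file name, table, columns and sample rows are illustrative assumptions, not details from the 10.5 release; a scheduled job would rerun the load step to refresh the snapshot.

    # A minimal sketch of creating a small .hyper extract file with Tableau's
    # tableauhyperapi Python package. The file name, table and sample rows are
    # hypothetical.
    from tableauhyperapi import (
        Connection, CreateMode, HyperProcess, Inserter, SqlType,
        TableDefinition, TableName, Telemetry,
    )

    with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
        with Connection(endpoint=hyper.endpoint,
                        database="telecom_health.hyper",
                        create_mode=CreateMode.CREATE_AND_REPLACE) as connection:
            # Define one table inside the extract.
            table = TableDefinition(
                table_name=TableName("site_metrics"),
                columns=[
                    TableDefinition.Column("site", SqlType.text()),
                    TableDefinition.Column("latency_ms", SqlType.double()),
                ],
            )
            connection.catalog.create_table(table)

            # Snapshot the source data into the extract.
            with Inserter(connection, table) as inserter:
                inserter.add_rows([("Detroit", 42.0), ("Warren", 55.5)])
                inserter.execute()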

“We extract every piece of data that we work with going to production, so we’re really looking forward to [Hyper],” Jordan East, a BI data analyst at General Motors, said in a presentation at Tableau Conference 2017, held in Las Vegas last October.

East works in GM’s global telecom organization, which supports the company’s communications needs. His team builds BI reports on the overall health of the communications system. The amount of data coming in has grown substantially over the years, and keeping up with the increasing volume of data has been a challenge, he said.

Extracting the data, rather than connecting Tableau to live data, helped improve report performance. East said he hopes the extra speed of Hyper will enable dashboards to be used in more situations, like live meetings.

Faster extracts mean fresher analytics

The Tableau 10.5 update also includes support for running Tableau Server on Linux, new governance features and other additions. But Hyper is getting most of the attention. Potentially, faster extract refreshes mean customers will refresh extracts more frequently and be able to do their Tableau analytics on fresher data.

“If Hyper lives up to demonstrations and all that has been promised, it will be an incredible enhancement for customers that are struggling with large complex data,” said Rita Sallam, a Gartner analyst.

Sallam’s one caveat was that customers who are doing Tableau analytics on smaller data sets will see less of a performance upgrade, because their extracts likely already refresh and load quickly. She said she believes the addition of Hyper will make it easier to analyze data stored in a Hadoop data lake, which was typically too big to efficiently load into Tableau before Hyper. This will give analysts access to larger, more complex data sets and enable deeper analytics, Sallam said.

Focus on enterprise functionality risky

Looking at the bigger picture, though, Sallam said there is some risk for Tableau in pursuing an enterprise focus. She said moving beyond line-of-business deployments and doubling down on enterprise functionality was a necessary move to attract and retain customers. But, at the same time, the company risks falling behind on analytics functionality.

Sallam said the features in analytics software that will be most important in the years ahead will be things like automated machine learning and natural language querying and generation. By prioritizing the nuts and bolts of enterprise functionality, Tableau hasn’t invested as much in these types of features, Sallam said.

“If they don’t [focus on enterprise features], they’re not going to be able to respond to customers that want to deploy Tableau at scale,” Sallam said. “But that does come with a cost, because now they can’t fully invest in next-generation features, which are going to be the defining features of user experience two or three years from now.”

Google Cloud Platform services engage corporate IT

Google continues to pitch its public cloud as a hub for next-generation applications, but in 2017, the company took concrete steps to woo traditional corporations that haven’t made that leap.

Google Cloud Platform services still lag behind Amazon Web Services (AWS) and Microsoft Azure, and Google’s lack of experience with enterprise IT is still seen as GCP’s biggest weakness. But the company made important moves this year to address that market’s needs, with several updates around hybrid cloud, simplified migration and customer support.

The shift to attract more than just the startup crowd has steadily progressed since the hire of Diane Greene in 2015. In 2017, her initiatives bore their first fruit.

Google expanded its Customer Reliability Engineering program to help new customers — mostly large corporations — model their architectures after Google’s. The company also added tiered support services for technical and advisory assistance.

New security features included Google Cloud Key Management Service and the Titan chip, which takes security down to the silicon. Dedicated Interconnect taps directly into Google’s network for consistent and secure performance. Several other updates and additions highlighted Google’s networking capabilities, which it sees as an advantage over other platforms; among them was a slower, cheaper networking tier that Google claims still performs on par with the competition’s best offerings for IT shops.

Google Cloud Platform services also expanded into hybrid cloud through separate partnerships with Cisco and Nutanix, with products from each partnership expected to be available in 2018. The Cisco deal involves a collection of products for cloud-native workloads and will lean heavily on open source projects Kubernetes and Istio. The Nutanix deal is closer to the VMware on AWS offering as a lift-and-shift bridge between the two environments.

And for those companies that want to move large amounts of data from their private data centers to the cloud, Google added its own version of AWS’ popular Snowball device. Transfer Appliance is a shippable server that can be used to transfer up to 1 PB of compressed data to Google cloud data centers.

In many ways, GCP is where Microsoft Azure was around mid-2014, as it tried to frame its cloud approach and put together a cohesive strategy, said Deepak Mohan, an analyst with IDC.

“They don’t have the existing [enterprise] strength that Microsoft did, and they don’t have that accumulated size that AWS does,” he said. “The price point is fantastic and the product offering is fantastic, but they need to invest in finding how they can approach the enterprise at scale.”

To help strengthen its enterprise IT story, Google bolstered its relatively small partner ecosystem — a critical piece to help customers navigate the myriad low- and high-level services — through partnerships forged with companies such as SAP, Pivotal and Rackspace. Though still not in the league of AWS or Azure, Google also has stockpiled some enterprise customers of its own, such as Home Depot, Coca-Cola and HSBC, to help sell its platform to that market. And it hired former Intel data center executive Diane Bryant as COO in November.

GCP also more than doubled its global footprint, with new regions in Northern Virginia, Singapore, Sydney, London, Germany, Brazil and India.

Price and features still matter for Google

Price is no longer the first selling point for Google Cloud Platform services, but it remained a big part of the company’s cloud story in 2017. Google continued to drop prices across various services, and it added a Committed Use Discount for customers that purchase a certain monthly capacity for one to three years. Those discounts were particularly targeted at large corporations, which prefer to plan ahead with spending when possible.

There were plenty of technological innovations in 2017, as well. Google Cloud Platform was the first to use Intel’s next-gen Skylake processors, and several more instance types were built with GPUs. The company also added features to BigQuery, one of its most popular services, and improved its interoperability with other Google Cloud Platform services.

Cloud Spanner, which sprang from an internal Google tool, addresses the challenges of globally distributed database applications that require high availability. It provides the consistency of transactional relational databases with the distributed, horizontal scaling associated with NoSQL databases. Cloud Spanner may be too advanced for most companies, but it made enough waves that Microsoft soon followed with its Cosmos DB offering, and AWS upgraded its Aurora and DynamoDB services.
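
The sketch below, using the google-cloud-spanner Python client, shows the style of access Spanner offers: ordinary SQL and ACID transactions over a horizontally distributed database. The project, instance, database and Orders table are hypothetical, and the snippet is illustrative rather than a description of any customer's setup.

    # A minimal sketch of Cloud Spanner's relational, transactional interface via
    # the google-cloud-spanner Python client. Project, instance, database and the
    # Orders table are hypothetical.
    from google.cloud import spanner

    client = spanner.Client(project="example-project")
    database = client.instance("example-instance").database("example-db")

    def record_order(transaction):
        # Runs inside a single ACID transaction; Spanner handles replication
        # and distribution behind the scenes.
        transaction.execute_update(
            "INSERT INTO Orders (OrderId, Region, Total) VALUES (@id, @region, @total)",
            params={"id": 1, "region": "EMEA", "total": 99.5},
            param_types={
                "id": spanner.param_types.INT64,
                "region": spanner.param_types.STRING,
                "total": spanner.param_types.FLOAT64,
            },
        )

    database.run_in_transaction(record_order)

    # Read it back with a strongly consistent query.
    with database.snapshot() as snapshot:
        for row in snapshot.execute_sql("SELECT OrderId, Region, Total FROM Orders"):
            print(row)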

That illustrates another hallmark of 2017 for Google’s cloud platform: On several fronts, the company’s cloud provider competitors came around to Google’s way of thinking. Kubernetes, the open source tool spun out of Google in 2014, became the de facto standard in container orchestration. Microsoft came out with its own managed Kubernetes service this year, and AWS did the same in late November — much to the delight of its users.

Machine learning, another area into which Google has pushed headlong for the past several years, also came to the forefront, as Microsoft and Amazon launched — and heavily emphasized — their own new products that require varying levels of technical knowhow.

Coming into this year, conversations about the leaders in the public cloud centered on AWS and Microsoft, but by the end of 2017, Google managed to overtake Microsoft in that role, said Erik Peterson, co-founder and CEO of CloudZero, a Boston startup focused on cloud security and DevOps.

“They really did a good job this year of distinguishing the platform and trying to build next-generation architectures,” he said.

Azure may be the default choice for Windows, but Google’s push into cloud-native systems, AI and containers has positioned its platform as the place to do something special for companies that don’t already have a relationship with AWS, Peterson said.

Descartes Labs, a geospatial analytics company in Los Alamos, N.M., jumped on Google Cloud Platform early on, partly because of Google’s activity with containers. Today, about 90% of its infrastructure is on GCP, said Tim Kelton, the company’s co-founder and cloud architect. He is pleased not only with how Google Container Engine manages its workloads and responds to new features in Kubernetes, but also with how other providers have followed Google’s lead.

“If I need workloads on all three clouds, there’s a way to federate that across those clouds in a fairly uniform way, and that’s something we never had with VMs,” Kelton said.

Kelton is also excited about Istio, an open source project led by Google, IBM and Lyft that sits on top of Kubernetes and creates a service mesh to connect, manage and secure microservices. The project looks to address issues around governance and telemetry, as well as things like rate limits, control flow and security between microservices.

“For us, that has been a huge part of the infrastructure that was missing that is now getting filled in,” he said.

Trevor Jones is a senior news writer with SearchCloudComputing and SearchAWS. Contact him at tjones@techtarget.com.

Nuffield Health finds SD-WAN deployment worth the added cost

Nuffield Health, the United Kingdom’s largest nonprofit healthcare company, didn’t spend less on networking when it switched its 31 hospitals and 158 fitness centers and medical clinics to SD-WAN. In fact, the overall cost increased by 20%.

“It wasn’t a savings,” said Dan Morgan, the IT operations director at Nuffield, based in Epsom, U.K. “We were spending slightly more than we were spending before, but we’re getting far more for it.”

The return from the SD-WAN deployment is not measured in reducing current costs. Instead, the technology is about the future.

In 2015, Nuffield decided to embrace cloud-based software to eliminate one of two data centers over time. Applications chosen by the company included Microsoft’s Office 365, SharePoint, Skype for Business and Teams.

Other products included TrakCare, an electronic medical record system from InterSystems Corp., and GymManager, a system developed by Sharptec for running sports facilities. Nuffield operates more than two dozen gyms with medical centers that provide rehab for injuries, weight control and health assessments.

Going online for software meant the single 30 Mbps MPLS link at each of Nuffield’s facilities was no longer adequate. Instead, the company wanted two 100 Mbps internet broadband links at each site — a more than sixfold increase in bandwidth.

“As soon as you start looking at that in an MPLS world, you’re then looking at double the cost, and probably triple the cost,” Morgan said.

Deploying SD-WAN

The network overhaul demanded new technology for routing traffic, so Nuffield chose Silver Peak’s SD-WAN. In general, the product creates a virtual overlay that abstracts the underlying private or public WAN connections, such as MPLS, broadband, fiber, wireless or Long Term Evolution. Network operators manage the traffic through the software console that comes with the system’s central controller.

Installing the SD-WAN appliance was easy enough for Nuffield to have one running on the LANs of each of the 189 sites within four months, Morgan said. Speed was essential because Nuffield wanted to switch facilities to broadband before its MPLS contracts expired.

The most significant problem was the installation of the optical fiber that would carry the broadband. If the cable wasn’t available at a facility, then Nuffield had to get approval from government regulators and landowners to have it installed. At the health facility in Cardiff, Wales, for example, Nuffield had to get permission from four farmers to dig up their fields to lay fiber to the center.

At another site, Nuffield lost the MPLS service before the broadband connection was up. So, a non-technical project manager stuffed the SD-WAN appliance and two 4G dongles in a knapsack and flew to the rural location.

Once connected to the LAN, the appliance re-established the internet connection after downloading the preset configurations from the controller. Nuffield’s LANs use mostly Cisco Catalyst switches.

“The site was up and running over a pair of 4G dongles and ran like that quite happily for a good couple of weeks until the new link was ready to get plugged in,” Morgan said.

Lessons learned from the SD-WAN deployment

In hindsight, Morgan would have preferred six more months with the MPLS links. That way, he could have set up each SD-WAN appliance with those connections and changed over to the new ones when they were ready.

Another gotcha for Morgan was failing to have a full understanding of how each device communicates with the network, especially hardware that hospitals might use for years. Examples include magnetic resonance imaging (MRI) equipment and computed tomography (CT) scanners used to diagnose tumors.

“You end up having to retrospectively fix it, rather than accounting for it in the migration piece,” Morgan said.

Now that the SD-WAN deployment is done, Nuffield is working on migrating all its hospitals and gym-based medical centers to the online TrakCare EMR system. Also, the nonprofit will gradually consolidate on-premises applications in one data center. Nuffield plans to finish both projects within four years.

Nuffield will run its business operations on the broadband connections. The SD-WAN appliances will also have a 1 Gbps link available for the multigigabyte files that CT scanners, MRIs and X-ray machines create.

Nuffield will store and access those files through its private data center, which is a less expensive option than using a public cloud provider, such as Microsoft Azure, Morgan said.

“With cloud hosting platforms, such as Azure, you have to be careful of the data volumes that are going in and out to keep an eye on your costs,” Morgan said. “At the moment, things like Azure are still quite expensive to host that sort of environment.”

So, while SD-WAN didn’t cut Nuffield’s networking cost, it is providing a more substantial bang for the buck.

Stella.ai launches a new kind of recruitment marketplace

Stella.ai is a new company that asks employers to pool their knowledge worker job applicants in a recruitment marketplace. It might seem odd to expect potential competitors to share their applicants, but the Redwood City, Calif., “shared talent network” firm said it is launching with 100 participating firms.

It works like this: A job applicant applies for a job at a firm participating in Stella.ai’s shared talent network. The companies in the network have agreed to invite their job applicants to join Stella.ai. Automation is used to screen the candidates, and once this vetting takes place, the candidates are shown only jobs where they are likely to be successful.

Stella.ai’s automation approach has capabilities similar to some in-house recruiting management systems. It gathers data about incumbent job roles and uses it to help understand what the employer actually needs. Stella.ai, which claims its systems are AI-based, integrates its platform with employer applicant tracking systems. Humans play a role if an applicant’s threshold score isn’t decisive.

Adding contact list may aid job matches

Stella.ai’s recruitment marketplace adds a capability that a solo employer may not have. If a job applicant makes their contact list available, Stella.ai said, its system may be able to make a better job match.

“We look at the percentage of friends that you have in different geographies and the percentage of friends that you have in different industries,” said Richard Joffe, Stella.ai’s CEO and co-founder.

A job applicant might believe that the industry where they have strongest potential is healthcare, for instance. “But what if it turns out that 20% of your friends are actually in media? Maybe you would be more open to a job at HBO,” Joffe said.
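
As a rough illustration of that idea, the toy Python sketch below computes the share of a candidate's contacts per industry and flags industries above a cutoff. The field names, sample data and 20% threshold are hypothetical; the article does not describe Stella.ai's actual model.

    # A toy illustration of the contact-list signal Joffe describes: the share of a
    # candidate's contacts per industry (the same idea applies to geographies).
    # Field names, data and the 20% cutoff are hypothetical.
    from collections import Counter

    contacts = [
        {"industry": "healthcare", "geo": "New York"},
        {"industry": "media", "geo": "New York"},
        {"industry": "media", "geo": "Los Angeles"},
        {"industry": "healthcare", "geo": "Boston"},
        {"industry": "finance", "geo": "New York"},
    ]

    def share_by(field):
        counts = Counter(c[field] for c in contacts)
        total = sum(counts.values())
        return {value: count / total for value, count in counts.items()}

    industry_share = share_by("industry")   # e.g. {'healthcare': 0.4, 'media': 0.4, ...}

    # Surface industries where at least 20% of contacts work, even if the candidate
    # didn't list them as a preference.
    suggested = [ind for ind, share in industry_share.items() if share >= 0.2]
    print(sorted(suggested))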

HBO is one of the firms participating in Stella.ai’s recruitment marketplace, as are Unilever, Hilton, JetBlue, Rackspace, Allergan and Estee Lauder Companies, among others. In total, Stella.ai believes that the firms in its network will send out some 30 million invitations to job applicants, and it expects that about 3 million job applicants will join its network in 2018.

Stella.ai said it has raised more than $10 million for its recruitment marketplace. It received funding from the Defense Advanced Research Projects Agency (DARPA), which Joffe said is investigating its use in veteran job matching. DARPA did not respond to a request for comment by press time.

Analysts cautious in their assessment

Analysts haven’t been briefed by the company and, until they learn more, remain cautious in their assessment of the recruitment marketplace, or shared talent network.

Holger Mueller, principal analyst at Constellation Research, said there may be some similarities to Hired.com, which created an online talent pool of people who were screened and job seeking. The firm uses algorithmic matching to help find the right person for the job.

But having one application that can reach multiple vendors, Mueller said, “is very different than finding the one that works for you and applying for each one by one.”

Independent HR technology analyst George LaRocque wondered whether a model that involves sharing applicants with a competitor will be embraced. But he also said that change has been happening so rapidly in the recruiting area, he can’t rule out any new approach.

Some of the factors driving change in HR recruiting are generational — people are more willing to share information today, especially Millennials, LaRocque said. “They’re more open to the concept; they’re less intimidated by it.”

Along with providing resumes and opting in to share contact lists, Stella.ai also gives its job applicants the option to take a personality test, which Joffe said may help “unlock” other job opportunities; in other words, the candidate will be shown more jobs.

The firm claims that the odds of getting an interview through the platform are one in eight, versus one in 100 if an applicant applies to a company directly.

 “Every piece of data that the candidate provides can only be used to help them,” Joffe said.

Azure Backup service adds layer of data protection

It has never been more important to have a solid backup strategy for company data and workloads. Microsoft’s Azure Backup service has matured into a product worth considering due to its centralized management and ease of use.

Whether it’s ransomware or other kinds of malware, the potential for data corruption is always lurking. That means that IT admins need a way to streamline backup procedures with the added protection and high availability made possible by the cloud.

Azure Backup consolidates backups of on-premises workloads — SharePoint, SQL Server, Exchange, file servers, client machines and VMs — and cloud resources, such as infrastructure-as-a-service VMs, into one recovery vault with solid data protection and restore capabilities. Administrators can monitor and start backup and recovery activities from a single Azure-based portal. After the initial setup, this arrangement lightens the burden on IT because offsite backups require minimal time and effort to maintain.

How Azure Backup works

The Azure Backup service stores data in what Microsoft calls a recovery vault, which is the central storage locker for the service whether the backup targets are in Azure or on premises.

The administrator needs to create the recovery vault before the Azure Backup service can be used. From the Azure console, select All services, type in Recovery Services and select Recovery Services vaults from the menu. Click Add, give it a name, associate it with an Azure subscription, choose a resource group and location, and click Create.

From there, to back up on-premises Windows Server machines, open the vault and click the Backup button. Azure will prompt for certain information: whether the workload is on premises or in the cloud and what to back up — files and folders, VMs, SQL Server, Exchange, SharePoint instances, system state information, and data to kick off a bare-metal recovery. When this is complete, click the Prepare Infrastructure link.
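
Those portal steps can also be scripted. The sketch below drives the Azure CLI's az backup vault create command from Python; the resource group, vault name and region are placeholders, and it assumes the az CLI is installed and already logged in.

    # A minimal sketch of scripting the vault-creation steps above with the Azure
    # CLI, driven from Python. Resource group, vault name and region are
    # placeholders; assumes "az login" has already been run.
    import json
    import subprocess

    RESOURCE_GROUP = "rg-backup-demo"
    VAULT_NAME = "rv-backup-demo"
    LOCATION = "eastus"

    def az(*args):
        """Run an az CLI command and return its parsed JSON output."""
        result = subprocess.run(
            ["az", *args, "--output", "json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(result.stdout) if result.stdout.strip() else None

    # Create the resource group, then the Recovery Services vault inside it.
    az("group", "create", "--name", RESOURCE_GROUP, "--location", LOCATION)
    vault = az("backup", "vault", "create",
               "--resource-group", RESOURCE_GROUP,
               "--name", VAULT_NAME,
               "--location", LOCATION)
    print(vault["name"], vault["location"])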

Configure backup for a Windows machine

The Microsoft Azure Recovery Services Agent (MARS) handles on-premises backups. Administrators download the MARS agent from the Prepare Infrastructure link — which also supplies the recovery vault credentials — and install it on the machines to be protected. The agent uses those credentials to link each on-premises machine to the Azure subscription and its recovery vault.

Azure Backup pricing

Microsoft determines Azure Backup pricing based on two components: the number of protected VMs or other instances — Microsoft charges for each discrete item to back up — and the amount of backup data stored within the service. The monthly pricing is:

  • for instances up to 50 GB, each instance is $5 per month, plus storage consumed;
  • for instances more than 50 GB, but under 500 GB, each instance is $10, plus storage consumed; and
  • for instances more than 500 GB, each instance is $10 per nearest 500 GB increment, plus storage consumed.

Microsoft bases its storage prices on block blob storage rates, which vary based on the Azure region. While it’s less expensive to use locally redundant blobs than geo-redundant blobs, local blobs are less fault-tolerant. Restore operations are free; Azure does not charge for outbound traffic from Azure to the local network.
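
To make those tiers concrete, the short sketch below estimates the monthly per-instance software fee from the rates listed above; storage charges vary by region and redundancy choice, so they appear only as a caller-supplied placeholder.

    # A small sketch applying the per-instance tiers listed above. Storage charges
    # vary by region and redundancy, so they appear only as a placeholder figure.
    import math

    def instance_fee(size_gb):
        """Monthly Azure Backup software fee for one protected instance."""
        if size_gb <= 50:
            return 5.0
        if size_gb <= 500:
            return 10.0
        return 10.0 * math.ceil(size_gb / 500)   # $10 per 500 GB increment

    protected = [30, 120, 1200]               # hypothetical instance sizes in GB
    software_fee = sum(instance_fee(s) for s in protected)   # 5 + 10 + 30 = $45
    storage_estimate = 55.0                   # placeholder block blob storage cost
    print(f"Estimated monthly cost: ${software_fee + storage_estimate:.2f}")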

Pros and cons of the Azure Backup service

The service has several features that are beneficial to the enterprise:

  • There is support to back up on-premises VMware VMs. Even though Azure is a Microsoft cloud service, the Azure Backup product will take VMware VMs as they are and back them up. It’s possible to install the agent inside the VM on the Windows Server workload, but it’s neater and cleaner to just back up the VM.
  • Administrators manage all backups from one console regardless of the target location. Microsoft continually refines the management features in the portal, which is very simple to use.
  • Azure manages storage needs and automatically adjusts as required. This avoids the challenges and capacity limits associated with on-premises backup tapes and hard drives.

The Azure Backup service isn’t perfect, however.

  • It requires some effort to understand pricing. Organizations must factor in what it protects and how much storage those instances will consume.
  • The Azure Backup service supports Linux, but it requires a customized copy of System Center Data Protection Manager (DPM), which is more laborious to work with than the MARS agent.
  • Backing up Exchange, SharePoint and SQL workloads requires the DPM version that supports those products. Microsoft includes it with the service costs, so there’s no separate licensing fee, but it still requires more work to deploy and understand.

The Azure Backup service is one of the more compelling administrative offerings from Microsoft. I would not recommend it as a company’s sole backup product — local backups are still very important, and even more so if time to restore is a crucial metric for the enterprise — but Azure Backup is a worthy addition to a layered backup strategy.

SD-WAN a tool for combining networks, engineer says

It’s a good thing to work at a growing company. But it can be a bad thing when that growing company acquires another, and you’re the one charged with combining networks into a cohesive whole.

Ethan Banks, writing in PacketPushers, said the pressure to integrate applications and systems can be intense, leading engineers to cobble together quick-and-dirty options to keep the data flowing. But those options — say, a quick-and-dirty IPsec tunnel — can cause headaches later on.

Yet, there might be another approach to ease the pain associated with combining networks: SD-WAN. Software-defined WAN can be the glue engineers are looking for, Banks said. Among the technology’s advantages, it’s easily managed, offers redundant connectivity and supports interior gateway protocols, including the use of a dynamic multipoint virtual private network. In addition, Banks said SD-WAN permits network segmentation and service chaining. Banks also listed some caveats, among them cost and complexity.

Still, he said, “I see SD-WAN as a way to onboard an acquired network permanently, while retaining the fast time to connect that an IPsec tunnel offers. For organizations who already have an SD-WAN solution in place, there’s not much to think about. For organizations who haven’t invested in SD-WAN yet, this might be an additional driver to do so.”

Read what else Banks has to say about using SD-WAN as a tool for combining networks.

Juniper’s embrace of automation and what to expect

Dan Conde, an analyst with Enterprise Strategy Group in Milford, Mass., said he expects Juniper Networks to use its annual conference to shed more light on its Self-Driving Network initiative. The company last week released details about a trio of bot apps — due for release early next year — engineered to automate telemetry, auditing and peer monitoring.

Conde said he believes this is just the start. “Juniper has been an advocate of automation for a while,” he said, citing first-generation devices that relied on APIs instead of command-line interfaces to program them. “Automation is nothing new to them.”

What is new is layering intelligence on top of automation, giving the software the ability to adjust network performance as needed. Conde said it’s immaterial what role the intelligence serves — whether it’s checking configuration or status. What is important is automating as many processes as possible.

“I look forward to a day when even more items get automated, and when IT pros will someday leave behind their skepticism and conservatism on automation and embrace timesavers that make their lives easier,” Conde said.

Dig deeper into Conde’s thoughts about Juniper’s strategy.

What Google’s nifty chip may say about AI

So, Google’s nifty AlphaZero computer algorithm not only learned how to play chess in four hours, it went on to demolish Stockfish — known as the highest-rated chess computer — in a match of the gadgets.

After 100 games, it was AlphaZero 28-0 over Stockfish, with 72 draws, said Brad Shimmin, research director at Current Analysis Inc. in Sterling, Va., in a recent post. But it wasn’t about the game. Instead, it was about the chips.

Google engineered AlphaZero with 5,000 AI-specific tensor processing units (TPUs), which the machine used to “learn” how to play chess. The machine also had 64 second-generation TPUs that provided the necessary neural network training. Once the games began, Google stripped AlphaZero down to only four TPUs, which is all the machine needed to defeat Stockfish.

“AlphaZero’s mastery of chess stemmed from the sheer, brute force of Google’s AI-specific TPUs,” Shimmin said, adding that each TPU can deliver up to 225,000 predictions per second. A conventional CPU, by contrast, can only churn out 5,000 predictions per second. “It is this hardware-driven ability to iteratively learn at speed that unlocks the door to AI’s potential,” Shimmin said. “That’s where we’ll see the most innovation and competition over the coming year as vendors speed up AI through purpose-built hardware.”

Check in with Shimmin to read more about what Google is trying to do.