Tag Archives: computing

How to Use Azure Spot to Reduce Azure Virtualization Costs

One of the key benefits of cloud computing is the resource optimization achieved through hardware abstraction. This article explains how Azure Spot can provide a scalable virtualization management solution and save you from paying for unnecessary VM capacity.

What is Azure Spot?

When a workload becomes virtualized, it is no longer bound to any specific server and can be placed on any host with available and compatible resources. This allows organizations to consolidate their hardware and reduce their expenses by optimizing the placement of these virtual machines (VMs) so that they make the most efficient use of the underlying hardware. For example, a server running several VMs at only 50% resource utilization could likely host a few more VMs or other services with its remaining capacity. If additional capacity is needed, the lowest-priority VMs could be evicted from the host and replaced with more important workloads. Today this is a common practice in private clouds, where the admin has access to the hosts; however, it has been challenging to do in a public cloud, where users are not given access to the physical hardware.

Microsoft Azure recently announced the general availability of Azure Spot Virtual Machines, which enable low-priority VMs to be evicted and shut down when the host’s resources are needed. This provides features equivalent to Amazon Web Services (AWS) EC2 Spot Instances and Google Cloud Platform (GCP) Preemptible VMs. Similar functionality was previously provided by Microsoft Azure through Virtual Machine Scale Sets (VMSS) using low-priority VMs. Spot VMs are replacing low-priority VMs, and any existing VMSS low-priority VMs should be migrated to Spot VMs.

Azure Spot VMs are only suitable for specific scenarios; however, they are very cheap to run. Essentially, these Spot VMs run on “unsold” cloud capacity, so they are significantly discounted (up to 90%) and can be assigned a capped maximum price. We will now give an overview of the technology and the key workloads you should move to Azure Spot VMs to minimize your operating costs. If you are a managed service provider (MSP), you can pass these recommendations along to your tenants.

Azure Spot VM Workloads

The most important characteristic of a Spot VM is that it can be turned off and evicted at any time when its resources are needed by another VM or if the cost becomes too high. This is a hard shutdown, equivalent to switching off the power, so you will only have a limited window to save any data or the state of the VM. This means that VMs which are being accessed by customers or contain data that you want to retain are generally not suitable. Here are the most common workloads or tasks deployed with Windows or Linux Spot VMs.

  • Stateless: A service which just serves a single purpose, and is not required to store any state, such as a web server with static content.
  • Interruptible: A service that can be stopped at any time, such as a brute-force testing application.
  • Short: A service that runs quickly, such as a short-lived, scalable application.
  • Batch: A service that collects then batches a series of tasks to maximize hardware utilization for a short burst, such as a large-scale computation.
  • Untimed: A service that contains many tasks which can take a long time to complete, but has no specific deadline for completion.
  • Dev/Test: A service that is commonly used for continuous integration or delivery for a development team.

A Spot VM functions just like a pay-as-you-go VM, with the exception that it can be evicted. If you decide to use Spot VMs for different types of workloads, this is fully supported, but you accept the risks that come with it. 30 seconds before an eviction happens, a notification is provided which can be detected by the system so it can try to save data or close connections before the Spot VM is stopped. You could try to create VM checkpoints at regular intervals, which will allow you to restore a Spot VM to its last saved state, however, you would pay for the storage capacity for the virtual hard disk and its checkpoint(s), even if the VM is offline.

Deploying an Azure Spot VM

It is simple to configure a Spot VM in any Azure region. When creating a new VM using the Azure Portal, Azure PowerShell, Azure CLI or an Azure Resource Manager (ARM) Template, you can specify that the VM should use an Azure Spot instance. Using the Portal as the example, you will find that the Basics > Instance details tab lets you enable the Spot VM functionality (it is disabled by default). If you select a Spot VM, you can define the eviction policy and price controls, which are covered in a later section of this article. The following screenshot shows these settings. You can then create and manage the VM like any other Windows or Linux VM running in Azure.
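The Spot-specific settings map to a small slice of the VM definition. The sketch below uses the property names from the ARM template schema (`priority`, `evictionPolicy`, `billingProfile.maxPrice`); treat the helper function itself as illustrative rather than an official API:

```python
def spot_vm_properties(max_price_usd: float = -1.0,
                       eviction_policy: str = "Deallocate") -> dict:
    """Build the Spot-related slice of a VM definition.

    max_price_usd: hourly cap in US dollars; -1 means "run as long as
    the Spot price stays below the standard pay-as-you-go price".
    eviction_policy: "Deallocate" (default) or "Delete".
    """
    if eviction_policy not in ("Deallocate", "Delete"):
        raise ValueError("eviction_policy must be 'Deallocate' or 'Delete'")
    return {
        "priority": "Spot",
        "evictionPolicy": eviction_policy,
        "billingProfile": {"maxPrice": max_price_usd},
    }
```

These properties would be merged into the full VM resource definition alongside the hardware profile, OS image, and networking settings, whichever tool you deploy with.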


Figure 1 – Creating an Azure Spot VM from the Azure Portal

Evicting an Azure Spot VM

While an Azure Spot VM gives you a great deal on pricing, the VM can be shut down at any time and preempted by another VM from a customer willing to pay more, or if the cost exceeds the maximum price you have agreed to pay. You will receive a 30-second warning through the event channel that an eviction is about to happen, which may give you a chance to save some data or even take a snapshot, but this should not be used as a reliable way to retain important data. To see these notifications you can subscribe to Scheduled Events, which can trigger any type of task or command, such as attempting to save the data.
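Subscribing to Scheduled Events amounts to polling the instance metadata endpoint from inside the VM; Spot evictions surface as events with type "Preempt". A minimal sketch (the API version string is an assumption):

```python
import json
import urllib.request

# The metadata endpoint is only resolvable from inside an Azure VM.
METADATA_URL = ("http://169.254.169.254/metadata/scheduledevents"
                "?api-version=2019-08-01")

def fetch_scheduled_events() -> dict:
    """Poll the Scheduled Events endpoint (requires the Metadata header)."""
    req = urllib.request.Request(METADATA_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

def preempt_events(doc: dict) -> list:
    """Return the events that signal an imminent Spot eviction."""
    return [e for e in doc.get("Events", [])
            if e.get("EventType") == "Preempt"]
```

A background loop could call `fetch_scheduled_events()` every few seconds and, whenever `preempt_events()` returns anything, flush buffers and close connections before the roughly 30-second window expires.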

When you create the Spot VM, you must select either the Deallocate (default) or Delete eviction policy. The Delete option permanently deletes the VM and its disks, and there are no additional costs. A Deallocate policy means that the evicted VM is placed into a Stopped-Deallocated state and its VHD file is saved. A deallocated VM can be redeployed later if capacity frees up, although there is no guarantee that this will ever happen. You also have to pay for the storage used by this file, so be careful if you are running a lot of Spot VMs with this setting. If you want to redeploy that VM, you will have to restart it from the Azure Portal or using Azure PowerShell or the CLI. Even if the price goes down in the future, it will not automatically restart.

Virtual Machine Scale Sets (VMSS) and Azure Spot VMs

Azure Spot VMs are fully integrated and compatible with VM Scale Sets (VMSS). A VMSS lets you run a virtualized application behind a load balancer that can dynamically scale up or down based on user load. By using Spot VMs with a VMSS, you can scale up only when the costs are below the maximum price you have assigned. To create a Spot VM scale set, set the priority flag to Spot via the portal, PowerShell, the CLI, or in the Azure Resource Manager (ARM) template. At this time, an existing scale set cannot be converted into a Spot scale set; this configuration must be enabled when the scale set is created. To ensure that the set does not grow too large, a Spot VM quota can be applied to the scale set. Automatically scaling the set using the autoscale rules is fully supported; however, any deallocated Spot VMs will count against the quota, even if they are not running. For this reason, it is recommended to use the Delete eviction option for the Spot VMs within a VMSS.
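In a scale set definition, the same Spot settings sit inside the VM profile. The sketch below (property names per the ARM schema, illustrative only) defaults to the Delete policy because deallocated Spot VMs still count against the scale set's quota:

```python
def spot_scale_set_profile(max_price_usd: float = -1.0) -> dict:
    """Spot-related slice of a scale set's virtualMachineProfile.

    "Delete" is chosen here because deallocated Spot VMs count against
    the scale set's quota even while they are not running.
    """
    return {
        "virtualMachineProfile": {
            "priority": "Spot",
            "evictionPolicy": "Delete",
            "billingProfile": {"maxPrice": max_price_usd},
        }
    }
```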

Pricing for Azure Spot VMs

When you create a Spot VM, you will see the current price for the region, image, and VM size you have selected. The price is always displayed in US dollars for consistency and transparency, even if you use a different default currency for billing. The discount can range significantly, up to 90% off the base price. Keep in mind that if you are flexible on the type, size, or location of the VM, you can browse different options to find the cheapest price.

The current price can increase as more hardware is requested by other users, so you can also set a max price that you are willing to pay. So long as the current price is below the capped price, the VM will stay running. However, if the price becomes more expensive than your max price, the VM will be evicted according to the policy you have specified. If you want to change the max price, you have to first deallocate the VM, update the price, then restart the VM. If you do not care what the max price is, so long as it is cheaper than the standard price, then you can set the max price to -1. In this scenario, you will pay either the discounted price or the standard price, whichever is cheaper.
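The pricing rules above can be summarized in a small function. This is an illustrative model of the billing behavior described here, not an official Azure formula:

```python
def effective_hourly_price(spot_price: float,
                           standard_price: float,
                           max_price: float):
    """Return the hourly price paid, or None if the VM is evicted.

    max_price == -1 means the VM is never evicted for price reasons
    and is billed at the cheaper of the Spot and standard prices.
    """
    if max_price == -1:
        return min(spot_price, standard_price)
    if spot_price > max_price:
        return None  # evicted per the configured eviction policy
    return spot_price
```

For example, with a max price of -1 and a Spot price that has climbed to 0.15 against a 0.10 standard price, the function returns the 0.10 standard price rather than evicting the VM.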

Quotas for Azure Spot VMs

Microsoft Azure has a concept called quotas, which allows you to assign a maximum number of VM resources (vCPUs) to a particular user or group. To simplify management, Spot VMs have their own quota category, separate from regular VMs. This allows admins or managed service providers (MSPs) to ensure that only a maximum number of Spot VMs are running, so that they do not accidentally over-deploy VMs that exceed their need or budget. A single quota covers Spot VMs and Spot VMSS instances for all VMs in an Azure region. The following screenshot shows the Quota Details page for Spot VMs.


Figure 2 – Assigning a Quota to Spot VMs

Licensing for Azure Spot VMs

For the most part, Azure Spot VMs are the same as standard VMs. All VM sizes are supported as Spot VMs, excluding the B-series and promo versions (Dv2, NV, NC, H). Spot VMs are available in all regions except China, and they are not offered with free trial subscriptions or as a benefit. Cloud Service Providers (CSPs) and managed service providers (MSPs) may offer Spot VMs as a service to their tenants.


Spot VMs should become a popular Azure feature for the recommended workload types. This is another great example of how Microsoft is providing its customers with more deployment options, reducing costs while maximizing hardware utilization. Are there any other types of workloads you think would work well with Spot VMs? Let us know by posting in the comments section below.

Author: Symon Perriman

Honeywell quantum computing system passes IBM out of the gate

Honeywell International Inc. jumped into the quantum computing race this week with a system that uses trapped ion technology.

The new Honeywell quantum computing system has a Quantum Volume of 64, double that of existing quantum systems from companies such as IBM and D-Wave Systems. The company expects to deliver the system in 90 days.

The company attributes the system’s 64 rating to its new quantum charge coupled device (QCCD) architecture, which it says will allow Honeywell to increase its Quantum Volume by an order of magnitude each year for the next five years.

“The performance metric the [quantum computing] community is agreeing on now is Quantum Volume,” said Tony Uttley, president of Honeywell’s quantum solutions group. “We have seen over time that it matters more how low an error rate your system has, not just how many physical qubits you have. The right question to ask is how many effective qubits do you have.”

Honeywell chose the trapped ion approach, which is similar to the approach startup IonQ employs in its quantum system, because it allows you to start with “a perfect qubit,” Uttley said.

“When you start with a perfect qubit in your system, any errors that then occur can be more easily traced back to things you put into the surrounding infrastructure,” Uttley said. “What Honeywell is good at is taking a systems engineering approach to complex system design, so we are aware of all the potential entry points of error.”

Some analysts believe the new QCCD architecture the system uses could give Honeywell at least a temporary lead in the game of performance leapfrog many quantum system makers find themselves in. But Honeywell will have to keep leaping if it hopes to maintain that lead over time.

“They have to do some things beyond the [QCCD architecture] in order to achieve these huge Quantum Volume numbers they are talking about, like integrating more capabilities at the chip level,” said Paul Smith-Goodson, analyst-in-residence for quantum computing at Moor Insights &amp; Strategy. “But what they have right now looks pretty solid going forward.”


The technologies in the new Honeywell quantum computing system began development 10 years ago, according to Uttley. Some of those technologies are borrowed from the company’s various control systems, a market in which it built its reputation decades ago.

“As [quantum systems] get bigger and start to resemble process control plants, that plays to our core strength,” Uttley said. “Being able to control massively complex systems in a way that simplifies an operation you need to do is something we have a long history with.”

Another analyst agreed that Honeywell’s expertise in developing and manufacturing control systems gives them a technology advantage over quantum computing competitors that have never ventured into that business.

“Their experience in precision manufacturing and environmental controls should allow them to create a quantum system that blocks out more environmental noise which, in part, helps them achieve higher Quantum Volume,” said James Sanders, a cloud transformation analyst with 451 Research.

Along with the new system, Honeywell’s venture capital group, Honeywell Ventures, has made an investment in Cambridge Quantum Computing and Zapata Computing Inc., both producers of quantum software and quantum algorithms that will work jointly with Honeywell. Cambridge Quantum Computing focuses on a number of markets including chemistry, machine learning and augmented cybersecurity. Zapata’s algorithms focus on areas such as simulation of chemical reactions, machine learning and a range of optimization problems.

“We already work in vertical markets we believe will be profoundly impacted by quantum computing, like the aerospace, chemicals, and oil and gas industries,” Uttley said. “We already have domain experts in areas now that will focus on use cases applicable to quantum computing.”

The company is also partnering with JPMorgan Chase to develop quantum algorithms using Honeywell quantum computing. Last fall, Honeywell announced a partnership with Microsoft that will see the software giant provide cloud access to Honeywell’s quantum system through Microsoft Azure Quantum services.


Quantum computing strides continue with IBM, Q Network growth

IBM kept its quantum computing drumbeat going at the Consumer Electronics Show this week with news that it has more than doubled the number of IBM Q Network users over the past year and has signed with Daimler AG to jointly develop the next generation of rechargeable automotive batteries using quantum computers.

Some of the latest additions to the IBM Q Network include Delta Airlines, Goldman Sachs and the Los Alamos National Laboratory. The multiyear deal IBM signed with Delta is the first agreement involving quantum computing in the airline industry, officials from each company said. The two companies will explore developing practical applications to solve problems corporate IT shops and their respective users routinely face every day.

Delta joined the IBM Q Network through one of IBM’s Quantum Computing Hub organizations — in this case, North Carolina State University — where IBM can work more closely with not just user organizations in that region, but academic institutions as well. IBM believes Delta can also make meaningful contributions toward improving quantum computing skills as well as generally build a greater sense of community among a diverse set of organizations.

“Delta can work more closely with key professors and the academic research arm of N.C. State to improve their ability to teach students on a number of quantum technologies,” said Jamie Thomas, general manager overseeing the strategy and development of IBM’s Systems unit. “[Delta] can also offer up experts to many of the regional organizations in the southeast [United States] and collaborate with Research Triangle Park on a number of projects.”


Similarly, the Oak Ridge National Laboratory serves as a quantum computing hub working with other national labs as well as with academic institutions, including the Georgia Institute of Technology.

“You can see the relationships and ecosystems building (through the network of quantum hubs), which is important because they are all working to solve concrete problems, and that is necessary to increase the general maturation of quantum computing in regions across the country,” Thomas said.

IBM doubles Quantum Volume

In other quantum computing-related news, IBM officials said they have achieved another scientific milestone with its highest Quantum Volume to date of 32, doubling the previous high of 16. Company officials believe the Quantum Volume metric is a truer measurement of performance because it takes into consideration more than just the raw speed of its quantum computers. According to Thomas, it is a major step along the way to accomplishing Quantum Advantage, which is the ability to solve complex problems that are beyond the abilities of classical systems.

“This milestone is not only important to us because it edges us closer to Quantum Advantage, but because it means we have kept our commitment to double Quantum Volume on an annual basis,” Thomas said.


What helped IBM achieve its goal was the introduction of its 53-qubit quantum system last fall, in concert with improved qubit connectivity, a better coherence rate and enhanced error mitigation capabilities, all of which are essential measurements in raising the Quantum Volume number, Thomas said.

“Underneath these metrics are also things like improving the ability to manage these systems, increasing their resiliency and how all the electronics interact with the processor itself,” she said.

One analyst also believes that Quantum Volume is a more practical measure of a quantum system’s power, saying that too many vendors of quantum systems are focused on their machines’ speeds and feeds — similar to the performance battles waged among competing server vendors 10 and 20 years ago. He added this isn’t a practical metric given the nature of quantum science compared to classical architectures.

“Companies such as Google are focusing on Quantum Advantage from a speeds-and-feeds perspective, but that can be just a PR game,” said Frank Dzubeck, president of Communications Network Architects Inc. “I think IBM got a bit upset over those recent claims by Google of achieving Quantum Advantage and so they are taking this opportunity [at CES] to reinforce their point about Quantum Volume,” he said.

Quantum revs car batteries

IBM has also begun working jointly with Daimler to develop the next generation of automotive batteries. The two companies said they are using quantum computers to simulate the chemical makeup of lithium-sulfur batteries, which they claim offers significantly higher energy densities than the current lithium-ion batteries. Officials from both companies said their goal is to design a next-generation rechargeable battery from the ground up.

“The whole battery market is hot across all industries, particularly among automobiles,” Thomas said. “The key is finding different paths to create a battery that maintains its energy for longer periods of time and is more cost-effective for the masses.”

Another benefit to using lithium-sulfur batteries is it eliminates the need for cobalt, a material that is largely found in the Democratic Republic of the Congo, formerly known as the Belgian Congo. Because that country and the immediately surrounding territories are often war torn, supply of the materials at times can be constrained.

“The important considerations here are with no cobalt necessary, not only will the supply constraints disappear, but it makes these batteries for automobiles and trucks a lot less expensive,” Dzubeck said.


Quantum computing in business applications is coming

Quantum computing may still be in its infancy, but it is advancing rapidly and is already showing significant potential for business applications across several industries.

At MIT Technology Review’s Future Compute conference in December 2019, Alan Baratz, CEO of D-Wave Systems Inc., a Canadian quantum computing company, discussed the benefits of quantum computing in business applications and the new capabilities it can offer.

Editor’s note: The following has been edited for clarity and brevity.

Why should CIOs be thinking about quantum computing in business applications?

Alan Baratz: Quantum computing can accelerate the time to solve the hard problems. If you are in a logistics business and you need to worry about pack-and-ship or vehicle scheduling; or you’re in aerospace and you need to worry about crew or flight scheduling; or a drug company and you need to worry about molecular discovery and computational chemistry, the compute time to solve those problems at scale can be very large.

Typically, what companies do is come up with heuristics — they try to simplify the problem. Well, quantum computing has the potential to allow you to solve the complete problem much faster to get better solutions and optimize your business.

Would you say speed is considered as the most significant factor of quantum computing?

Baratz: Well, sometimes it’s speed, sometimes it’s quality of the solution [or] better solution within the same amount of time. Sometimes, it’s diversity of solution. One of the interesting things about the quantum computer is that maybe you don’t necessarily want the optimal solution but [rather] a set of good solutions that you can then use to optimize other things that weren’t originally a part of the problem. The quantum computer is good at giving you a set of solutions that are close to optimal in addition to the optimal solution.

What’s limiting quantum computing in terms of hardware?

Baratz: Well, up until now, in [D-Wave’s] case, it’s been the size and topology of the system because in order to solve your problem, you have to be able to map it onto the quantum processor. Remember, we aren’t gate-based, so it’s not like you specify the sequence of instructions or the gates. In order to solve your problem in our system, you have to take the whole problem and be able to map it into our hardware. That means with 2,000 qubits and each qubit connected to six others, there are only certain sizes of problems that you can actually map into our system. When we deliver the Advantage system next year, we double that — over 5,000 qubits [and] each qubit connected to 15 others — so, that will allow us to solve significantly larger problems.

In fact, for a doubling of the processor size, you get about a tripling of the problem size. But we’ve done one other thing: We’ve developed brand new hybrid algorithms [and] these are not types of hybrid algorithms that people typically think about. It’s not like you try to take the problem and divide it into chunks and solve [them]. This is a hybrid approach where we use a classical technique to find the core of the problem, and then we send that off to the quantum processor. With that, we can get to the point where we can solve large problems even with our 5,000-qubit system. We think once we deliver the 5,000-qubit system Advantage [and] the new hybrid solver, we’ll see more and more companies being able to solve real production problems.

Do you think companies are in a race toward quantum computing?

Baratz: No, they’re not because in some cases they’re not aware, and in some cases they don’t understand what’s possible. The problem is there are very large companies that make a lot of noise in the quantum space and their approach to quantum computing is one where it will take many, many years before companies can actually do something useful with the system. And because they’re the loudest out there, many companies think that’s all there is. We’ve been doing a better job of getting the D-Wave message out, but we still have a ways to go.

What excites you most about quantum computing in business settings?

Baratz: First, just the ability to solve problems that can’t otherwise be solved to me is exciting. When I was at MIT, my doctorate was in theory of computation. I was kind of always of the mindset that there are just some problems that you’re not going to be able to solve with one computer. I wouldn’t say quantum computing removes, but [it] reduces that limitation — that restriction — and it makes it possible to solve problems that couldn’t otherwise be solved.

But more than that, the collection of technologies that we have to use to build our system, even that is exciting. As I mentioned during the panel, we develop new superconducting circuit fabrication recipes. And we’re kind of always advancing the state of the art there. We’re advancing the state of the art in refrigeration and how to remove contaminants from refrigerators, because it can take six weeks to calibrate a quantum computer once you cool it down, once you cool the chip down. Well, if you get contaminants in the refrigerator and you have to warm up the system and remove those contaminants, you’re going to lose two months of compute time. We have systems that run for two and a half, three years, and nobody else really has the technology to do that.

It’s been said that one of the main concerns with quantum computers is increased costs. Can you talk a little more about that?

Baratz: Well, there was that question [from the panel] about where all the power growth is going to stop. So, the truth of the matter is our system runs on about 20 kilowatts. The reason is, the only thing we really need power for is the refrigerator, and we can put many chips in one refrigerator. So, as the size of the chip grows, as the number of chips grows, the power doesn’t grow. Power is the refrigerator.

Second, the systems are pricey if you want to buy one [but through] cloud access, anybody can get it. I mean, we even give free time, but we do sell it and currently we sell it at $2,000 an hour, but you can run many, many problems within that amount of time.

Will this be the first technology that CIOs will never have on premises?

Baratz: Oh, never say never. We continue to work on shrinking the size of the system as well so, who knows?


Hybrid, cost management cloud trends to continue in 2020

If the dawn of cloud computing can be pegged to AWS’ 2006 launch of EC2, then the market has entered its gangly teenage years as the new decade looms.

While the metaphor isn’t perfect, some direct parallels can be seen in the past year’s cloud trends.

For one, there’s the question of identity. In 2019, public cloud providers extended services back into customers’ on-premises environments and developed services meant to accommodate legacy workloads, rather than emphasize transformation. 

Maturity remains a hurdle for the cloud computing market, particularly in the area of cost management and optimization. Some progress occurred on this front in 2019, but there’s much more work to be done by both vendors and enterprises.

Experimentation was another hallmark of 2019 cloud computing trends, with the continued move toward containerized workloads and serverless computing. Here’s a look back at some of these cloud trends, as well as a peek ahead at what’s to come in 2020.

Hybrid cloud evolves

Hybrid cloud has been one of the more prominent cloud trends for a few years, but 2019 saw key changes in how it is marketed and sold.

Companies such as Dell EMC, Hewlett Packard Enterprise and, to a lesser extent, IBM have scuttled or scaled back their public cloud efforts and shifted to cloud services and hardware sales. This trend has roots prior to 2019, but the changes took greater hold this year.


Today, “there’s a battle between the cloud-haves and cloud have-nots,” said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

Google, as the third-place competitor in public cloud, needs to attract more workloads. Its Anthos platform for hybrid and multi-cloud container orchestration projects openness but still ties customers into a proprietary system.

In November, Microsoft introduced Azure Arc, which extends Azure management tools to on-premises and cloud platforms beyond Azure, although the latter functionality is limited for now.

Earlier this month, AWS announced the long-expected general availability of Outposts, a managed service that puts AWS-built server racks loaded with AWS software inside customer data centers to address issues such as low latency and data residency requirements.

It’s similar in ways to Azure Stack, which Microsoft launched in 2017, but one key difference is that partners supply Azure Stack hardware. In contrast, Outposts has made AWS a hardware vendor and thus a threat to Dell/EMC, HPE and others who are after customers’ remaining on-premises IT budgets, Mueller said.

But AWS needs to prove itself capable of managing infrastructure inside customer data centers, with which those rivals have plenty of experience.

Looking ahead to 2020, one big question is whether AWS will join its smaller rivals by embracing multi-cloud. Based on the paucity of mentions of that term at re:Invent this year, and the walled-garden approach embodied by Outposts, the odds don’t look favorable.

Bare-metal options grow

Thirteen years ago, AWS launched its Elastic Compute Cloud (EC2) service with a straightforward proposition: Customers could buy VM-based compute capacity on demand. That remains a core offering of EC2 and its rivals, although the number of instance types has grown exponentially.

More recently, bare-metal instances have come into vogue. Bare-metal strips out the virtualization layer, giving customers direct access to the underlying hardware. It’s a useful option for workloads that can’t suffer the performance hit VMs carry and avoids the “noisy neighbor” problem that crops up with shared infrastructure.

Google rolled out managed bare-metal instances in November, following AWS, Microsoft, IBM and Oracle. Smaller providers such as CenturyLink and Packet also offer bare-metal instances. The segment overall is poised for significant growth, reaching more than $26 billion by 2025, according to one estimate.

Multiple factors will drive this growth, according to Deepak Mohan, an analyst with IDC.

Two of the biggest influences in IaaS today are enterprise workload movement into public cloud environments and cloud expansions into customers’ on-premises data centers, evidenced by Outposts, Azure Arc and the like, Mohan said.

The first trend has compelled cloud providers to support more traditional enterprise workloads, such as applications that don’t take well to virtualization or that are difficult to refactor for the cloud. Bare metal gets around this issue.

“As enterprise adoption expands, we expect bare metal to play an increasingly critical role as the primary landing zone for enterprise workloads as they transition into cloud,” Mohan said.

Cloud cost management gains focus

The year saw a wealth of activity around controlling cloud costs, whether through native tools or third-party applications. Among the more notable moves was Microsoft’s extension of Azure Cost Management to AWS, with support for Google Cloud expected next year.

But the standout development was AWS’ November launch of Savings Plans, which was seen as a vast improvement over its longstanding Reserved Instances offering.

Reserved Instances give big discounts to companies that are willing to make upfront spending commitments but have been criticized for inflexibility and a complex set of options.


“Savings Plans have massively reduced the complexity in gaining such discounts, by allowing companies to make commitments to AWS without having to be too prescriptive on the application’s specific requirements,” said Owen Rogers, who heads the digital economics unit at 451 Research. “We think this will appeal to enterprises and will eventually replace reserved instances as AWS’ de facto committed pricing model.”

The new year will see enterprises increasingly seek to optimize their costs, not just manage and report on them, and Savings Plans fit into this expectation, Rogers added.
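The mechanics behind those committed-spend discounts are straightforward to model. Below is a minimal sketch with hypothetical rates; for simplicity the commitment is expressed in hours, whereas real Savings Plans commit to a dollars-per-hour spend:

```python
# Hypothetical rates -- actual AWS pricing varies by instance family and region.
ON_DEMAND_RATE = 1.00      # $/hour on demand
SAVINGS_PLAN_RATE = 0.60   # effective $/hour under a hypothetical commitment

def monthly_cost(hours_used, committed_hours_per_month=0):
    """Bill the first `committed_hours_per_month` at the discounted rate;
    any overage falls back to on-demand pricing."""
    committed = min(hours_used, committed_hours_per_month)
    overage = hours_used - committed
    return committed * SAVINGS_PLAN_RATE + overage * ON_DEMAND_RATE

# 720 hours of steady usage, fully covered by the commitment:
print(monthly_cost(720, committed_hours_per_month=720))  # 432.0
# The same usage with no commitment at all:
print(monthly_cost(720))                                 # 720.0
```

The appeal Rogers describes is visible even in this toy model: steady workloads capture the full discount, while bursty usage above the commitment simply pays the on-demand rate.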

If your enterprise hasn’t gotten serious about cloud cost management, doing so would be a good New Year’s resolution. There’s a general prescription for success in doing so, according to Corey Quinn, cloud economist at the Duckbill Group.

“Understand the goals you’re going after,” Quinn said. “What are the drivers behind your business?” Break down cloud bills into what they mean on a division, department and team-level basis. It’s also wise to start with the big numbers, Quinn said. “You need to understand that line item that makes up 40% of your bill.”
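Quinn's prescription can be sketched in a few lines: roll the bill's line items up by team, then surface the single item that dominates the total. The line items below are hypothetical:

```python
from collections import defaultdict

# Hypothetical cost line items: (team, service, monthly_dollars)
line_items = [
    ("data-eng", "EC2", 42_000),
    ("data-eng", "S3", 9_000),
    ("web", "EC2", 18_000),
    ("web", "CloudFront", 6_000),
    ("ml", "SageMaker", 25_000),
]

# Roll the bill up by team...
by_team = defaultdict(int)
for team, service, cost in line_items:
    by_team[team] += cost

# ...then find the line item that makes up the biggest share of the bill.
total = sum(cost for _, _, cost in line_items)
biggest = max(line_items, key=lambda item: item[2])
share = biggest[2] / total

print(dict(by_team))   # {'data-eng': 51000, 'web': 24000, 'ml': 25000}
print(biggest, f"{share:.0%} of the bill")
```

Real cost tools work from tagged billing exports rather than hand-entered tuples, but the shape of the analysis is the same: group, total, and start with the biggest number.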

While some companies try to make cloud cost savings the job of many people across finance and IT, in most cases the responsibility shouldn’t fall on engineers, Quinn added. “You want engineers to focus on whether they can build a thing, and then cost-optimize it,” he said.

Serverless vs. containers debate mounts

One topic that could come up with more frequency in 2020 is the debate over the relative merits of serverless computing versus containers.

Serverless advocates such as Tim Wagner, inventor of AWS Lambda, contend that a movement is afoot.

At re:Invent, the serverless features AWS launched were not “coolness for the already-drank-the-Kool-Aid crowd,” Wagner said in a recent Medium post. “This time, AWS is trying hard to win container users over to serverless. It’s the dawn of a new ‘hybrid’ era.”

Another serverless expert hailed Wagner’s stance.

“I think the container trend, at its most mature state, will resemble the serverless world in all but execution duration,” said Ryan Marsh, a DevOps trainer with TheStack.io in Houston.


The containers vs. serverless debate has raged for at least a couple of years, and the notion that neither approach can effectively answer every problem persists. But observers such as Wagner and Marsh believe that advances in serverless tooling will shift the discussion.

AWS Fargate for EKS (Elastic Kubernetes Service) became available at re:Invent. The offering provides a serverless framework that launches, scales and manages Kubernetes container clusters on AWS. Earlier this year, Google released a similar service called Cloud Run.

The services will likely gain popularity as customers deeply invested in containers see the light, Marsh said.

“I turned down too many clients last year that had container orchestration problems. That’s frankly a self-inflicted and uninteresting problem to solve in the era of serverless,” he said.

Containers’ allure is understandable. “As a logical and deployable construct, the simplicity is sexy,” Marsh said. “In practice, it is much more complicated.”

“Anything that allows companies to maintain the feeling of isolated and independent deployable components — mimicking our warm soft familiar blankie of a VM — with containers, but removes the headache, is going to see adoption,” he added.


Pivot3, Scale Computing HCI appliances zoom in on AI, edge

Hyper-converged vendors Pivot3 and Scale Computing this week expanded their use cases with product launches.

Scale formally unveiled HE150 all-flash NVMe hyper-converged infrastructure (HCI) appliances for space-constrained edge environments. Scale sells the compute device as a three-node cluster, but it does not require a server rack.

The new device is a tiny version of the Scale HE500 HCI appliances that launched this year. HE150 measures 4.6 inches wide, 1.7 inches high and 4.4 inches deep. Scale said select customers have deployed proofs of concept.

Pivot3 rolled out AI-enabled data protection in its Acuity HCI operating software. The vendor said Pivot3 appliances can stream telemetry data from customer deployments to the vendor’s support cloud for historical analysis and troubleshooting.

HCI use cases evolve

Hyper-converged infrastructure vendors package the disparate elements of converged infrastructure in a single piece of hardware, including compute, hypervisor software, networking and storage.

Dell is the HCI market leader, in large measure due to VMware vSAN, while HCI pioneer Nutanix holds the No. 2 spot. But competition is heating up. Server vendors Cisco and Hewlett Packard Enterprise have HCI products, as does NetApp with a product using its SolidFire all-flash technology. Ctera Networks, DataCore and startup Datrium are also trying to elbow into the crowded space.

Pivot3 storage is used mostly for video surveillance, although the Austin, Texas-based vendor has focused on increasing its deal size for its Acuity systems.

Scale Computing, based in Indianapolis, sells the HC3 virtualization platform for use in edge and remote office deployments. The company has customers in education, financial services, government, healthcare and retail.

Hyper-converged infrastructure has expanded beyond its origins in virtual desktop infrastructure to support cloud analytics of primary and secondary storage, said Eric Sheppard, a research vice president in IDC’s infrastructure systems, platforms and technologies group.

“The most common use of HCI is virtualized applications, but the percentage of [hosted] apps that are mission-critical has increased considerably,” Sheppard said.

Scale HE150: Small gear for the edge

Scale’s HC3 system runs HyperCore, Scale’s own operating system designed around Linux-based KVM. Unlike most HCI appliances, HC3 does not support VMware.

Scale Computing's HE150 mini-HCI appliance

The HE150 includes a full version of the HyperCore operating system, including rolling updates, replication and snapshots. The device comes with up to six cores and up to 64 GB of RAM. Intel’s Frost Canyon Next Unit of Computing (NUC) mini-PC provides the compute. Storage per node is up to 2 TB with one M.2 NVMe SSD.

Traditional HCI appliances, including Scale’s larger HC3 appliances, require a dedicated backplane switch to route network traffic. The HE150 features new HC3 Edge Fabric software-based tunneling for communication between HC3 nodes. The tunneling is needed to accommodate the tiny form factor, said Dave Demlow, Scale’s VP of product management.

Scale recommends a three-node HE150 cluster. Data is mirrored twice between the nodes for redundancy. Demlow said the cluster takes up the space of three smart phones stacked together.

Eric Slack, a senior analyst at Evaluator Group, said Scale’s operating system enables it to sell an HCI appliance the size of Scale HE150.

“This new small device runs the full Scale HyperCore OS, which is an important feature. Scale stack is pretty thin. They don’t run VMware or a separate software-defined storage layer, so HyperCore can run with limited memory and a limited number of CPU cores,” Slack said.

Pivot3 HCI appliances

Pivot3 did not make hardware upgrades with this release. The features in Acuity center on AI-driven analytics for more automated management.

Pivot3 enhanced its Intelligence Engine policy manager with AI tools for backup and disaster recovery in multi-petabyte storage. The move comes amid research by IDC that indicates more enterprises expect HCI vendors to provide autonomous management via the cloud.

The IDC survey of 252 data centers found that 89% rely on cloud-based predictive analytics to manage IT infrastructure, but only 72% had enterprise storage systems that bundle analytics tools as part of the base price.

“The entirety of the data center infrastructure market is increasing the degree to which tasks can be automated. All roads lead toward autonomous operations, and cloud-based predictive analytics is the fastest way to get there,” Sheppard said.

Pivot3 said it added self-healing that identifies failed nodes and automatically returns repaired nodes to the cluster. The vendor also added delta differencing to its erasure coding for faster rebuilds.


What admins need to know about Azure Stack HCI

Despite all the promise of cloud computing, it remains out of reach for administrators who cannot, for different reasons, migrate out of the data center.

Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.

Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.

What distinguishes Azure Stack HCI from Azure Stack?

When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion among many in IT. The company already offered a similarly named product called Azure Stack, which runs a version of its Azure cloud platform inside the data center.

Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.

Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.

How is Azure Stack HCI different from the WSSD program?

While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.

Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.

For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016.

Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.

For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.

What are the deployment and management options?

The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.

To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.

Windows Admin Center drive dashboard
The Windows Admin Center server management tool offers a dashboard to check on the drive performance for issues related to latency or when a drive fails.

Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.

Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.

How much does it cost to use Azure Stack HCI?

The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.

There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.

On top of the price of the hardware are the annual maintenance and support fees, which are typically a percentage of the purchase price of the appliance.

If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor, which analyzes application data to determine whether a problem is about to occur, then additional fees come into play. Organizations that keep their Azure Stack HCI system strictly on premises will avoid these extra costs.


AWS moves into quantum computing services with Braket

Amazon debuted a preview version of its quantum computing services this week, along with a new quantum computing research center and lab where AWS cloud users can work with quantum experts to identify practical, short-term applications.

The new AWS quantum computing managed service, called Amazon Braket, is aimed initially at scientists, researchers and developers, giving them access to quantum systems provided by IonQ, D-Wave and Rigetti.

Amazon’s quantum computing services news comes less than a month after Microsoft disclosed it is developing a chip capable of running quantum software. Microsoft also previewed a version of its Azure Quantum Service and struck partnerships with IonQ and Honeywell to help deliver the Azure Quantum Service.

In November, IBM said its Qiskit QC development framework supports IonQ’s ion trap technology, used by IonQ and Alpine Quantum Technologies.

Google recently claimed it was the first quantum vendor to achieve quantum supremacy — the ability to solve complex problems that classical systems either can’t solve or would take them an extremely long time to solve. Company officials said it represented an important milestone.

In that particular instance, Google’s Sycamore processor took just 200 seconds to solve a problem that would take a classical computer 10,000 years. The claim was met with a healthy amount of skepticism by competitors and more neutral observers alike; most said they would reserve judgment on the results until they could take a closer look at the methodology involved.

Cloud services move quantum computing forward

Peter Chapman, CEO and president of IonQ, doesn’t foresee any conflicts between IonQ’s respective agreements with rivals Microsoft and AWS. AWS jumping into the fray alongside Microsoft and IBM will help push quantum computing closer to the limelight and make users more aware of the technology’s possibilities, he said.

“There’s no question AWS’s announcements give greater visibility to what’s going on with quantum computing,” Chapman said. “Over the near term they are looking at hybrid solutions, meaning they will mix quantum and classical algorithms making [quantum development software] easier to work with,” he said.


Microsoft and AWS are at different stages of development, making it difficult to gauge which company has advantages over the other. But what Chapman does like about AWS right now is the set of APIs that allows a developer’s application to run across the different quantum architectures of IonQ (ion trap), D-Wave (quantum annealing) and Rigetti (superconducting chips).

“At the end of the day it’s not how many qubits your system has,” Chapman said. “If your application doesn’t run on everyone’s hardware, users will be disappointed. That’s what is most important.”

Another analyst agreed that the sooner quantum algorithms can be melded with classical algorithms to produce something useful in an existing corporate IT environment, the faster quantum computing will be accepted.

“If you have to be a quantum expert to produce anything meaningful, then whatever you do produce stays in the labs,” said Frank Dzubeck, president of Communications Network Architects, Inc. “Once you integrate it with the classical world and can use it as an adjunct for what you are doing right now, that’s when [quantum technology] grows like crazy.”

Microsoft’s Quantum Development Kit, which the company open sourced earlier this year, also allows developers to create applications that operate across a range of different quantum architectures. Like AWS, Microsoft plans to combine quantum and classical algorithms to produce applications and services aimed at the scientific markets and ones that work on existing servers.

One advantage AWS and Microsoft provide for smaller quantum computing companies like IonQ, according to Chapman, is not just access to their mammoth user bases, but support for things like billing.

“If customers want to run something on our computers, they can just go to their dashboard and charge it to their AWS account,” Chapman said. “They don’t need to set up an account with us. We also don’t have to spend tons of time on the sales side convincing Fortune 1000 users to make us an approved vendor. Between the two of them [Microsoft and AWS], they have the whole world signed up as approved vendors,” he said.

The mission of the AWS Center for Quantum Computing will be to solve longer-term technical problems using quantum computers. Company officials said they have users ready to begin experimenting with the newly minted Amazon Braket but did not identify any users by name.

The closest they came was a prepared statement by Charles Toups, vice president and general manager of Boeing’s Disruptive Computing and Networks group. The company is investigating how quantum computing, sensing and networking technologies can enhance Boeing products and services for its customers, according to the statement.

“Quantum engineering is starting to make more meaningful progress and users are now asking for ways to experiment and explore the technology’s potential,” said Charlie Bell, senior vice president with AWS’s Utility Computing Services group.

AWS’s working assumption is that quantum computing will be a cloud-first technology, and Amazon Braket and the Quantum Solutions Lab will be the way AWS gives its users their first quantum experience.

Corporate and third-party developers can create their own customized algorithms with Braket, which gives them the option of executing either low-level quantum circuits or fully managed hybrid algorithms. This makes it easier to switch between software simulators and their chosen quantum hardware.
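To make "low-level quantum circuits" concrete, here is a minimal pure-Python statevector sketch of the classic two-qubit Bell-pair circuit. It illustrates what such a circuit computes; it is not the Amazon Braket SDK itself:

```python
import math

# Two-qubit statevector in the basis |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Apply a Hadamard gate to the first qubit (mixes indices 0/2 and 1/3)."""
    r = 1 / math.sqrt(2)
    return [r * (s[0] + s[2]), r * (s[1] + s[3]),
            r * (s[0] - s[2]), r * (s[1] - s[3])]

def cnot_q0_q1(s):
    """Flip the second qubit when the first qubit is 1 (swaps |10> and |11>)."""
    return [s[0], s[1], s[3], s[2]]

# The Bell-pair circuit: H on qubit 0, then CNOT from qubit 0 to qubit 1.
state = cnot_q0_q1(hadamard_q0(state))
probs = {basis: round(amp * amp, 3)
         for basis, amp in zip(["00", "01", "10", "11"], state)}
print(probs)  # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

On real hardware or a managed simulator, the same circuit would be expressed through an SDK and executed as a job; the point here is only that a "low-level circuit" is a sequence of gates producing a distribution over measurement outcomes.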

The AWS Center for Quantum Computing is based at Caltech, which has long invested in both experimental and theoretical quantum science and technology.


New Azure HPC and partner offerings at Supercomputing 19

For more than three decades, the researchers and practitioners who make up the high-performance computing (HPC) community have come together for their annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado to advance the state of the art in HPC. The theme for Supercomputing ‘19 is “HPC is now,” a theme that resonates strongly with the Azure HPC team given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads at up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency HDR InfiniBand networking.
  • Accelerated Compute – Choose from the latest GPUs, field programmable gate array (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.

Azure HPC unveils new offerings

  • The preview of new second gen AMD EPYC based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare metal supercomputers. HBv2 is not just one of the most powerful HPC servers Azure has ever made, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2 GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16 GB of HBM2 memory to provide a powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of new NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, and are designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature eight NVIDIA Tesla V100 NVLink-interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing for easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage, or run Azure Blob storage behind a POSIX (portable operating system interface) file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores, over 7,000 independent threads, and a large 300 MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The eight Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelining) or data-parallel way. An NDv3 VM also includes 40 CPU cores, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices per Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and the SDAccel 2019.1 runtime will run on Azure VMs just as they do on premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • Improved error detection and reporting user interface (UI) that greatly simplify diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI that provides users direct control into node topology for latency sensitive applications.

    • Support for ephemeral OS disks, which improve start-up performance and cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.
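The 4-teraFLOPS double-precision figure quoted above for HBv2 can be sanity-checked from the published core count. The clock speed and per-core FLOP rate below are our assumptions for illustration (a 2.25 GHz base clock and two 256-bit FMA units per core), not figures from the announcement:

```python
# Back-of-the-envelope peak double-precision throughput for an HBv2 VM.
cores = 120            # non-hyperthreaded cores per HBv2 VM (from the announcement)
clock_hz = 2.25e9      # assumed base clock
flops_per_cycle = 16   # assumed: 2 FMA units x 4 doubles x 2 ops (multiply + add)

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(round(peak_tflops, 2))  # 4.32
```

Under those assumptions the theoretical peak lands slightly above the quoted "up to 4 teraFLOPS," which is consistent with a marketing figure rounded down from a base-clock estimate.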

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest in capabilities for HPC oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!

Author: Microsoft News Center

Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. “Customers didn’t have a strategic model to work from.”

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there’s edge computing, which stores data where the user is, in between the company and the public cloud – for example, on customers’ mobile devices or on sensors in smart buildings like hospitals and factories.


That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer’s IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise – automating system updates and security enhancements – brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said Erik Vogel of Hewlett Packard Enterprise (HPE). HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.
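The governance Shah describes is expressed through Azure Policy definitions, which Arc extends to machines and clusters outside Azure. As an illustrative sketch only (the `costCenter` tag name is a placeholder, not from this article), a definition that denies resources missing a required tag looks like this:

```json
{
  "properties": {
    "displayName": "Require a costCenter tag on all resources",
    "mode": "Indexed",
    "policyRule": {
      "if": {
        "field": "tags['costCenter']",
        "exists": "false"
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
```

Once a machine or Kubernetes cluster is connected through Arc, an assignment of a policy like this applies to it the same way it would to a native Azure resource, which is how companywide compliance checks can span locations.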

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.
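That portability rests on the fact that Kubernetes workloads are described by standard manifests. A minimal, illustrative Deployment (the names and image below are placeholders, not from this article) would apply unchanged, via `kubectl apply`, to a cloud-hosted cluster, an edge cluster, or an on-premises one connected through Arc:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Because the manifest carries no cloud-specific detail, moving the workload is a matter of pointing `kubectl` at a different cluster.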

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.


Author: Microsoft News Center