
Quantum computing strides continue with IBM, Q Network growth

IBM kept its quantum computing drumbeat going at the Consumer Electronics Show this week with news that it has more than doubled the number of IBM Q Network users over the past year and has signed with Daimler AG to jointly develop the next generation of rechargeable automotive batteries using quantum computers.

Some of the latest additions to the IBM Q Network include Delta Air Lines, Goldman Sachs and the Los Alamos National Laboratory. The multiyear deal IBM signed with Delta is the first agreement involving quantum computing in the airline industry, officials from both companies said. The two companies will explore practical applications to solve problems that corporate IT shops and their users face every day.

Delta joined the IBM Q Network through one of IBM’s Quantum Computing Hub organizations — in this case, North Carolina State University — where IBM can work more closely not just with user organizations in that region, but with academic institutions as well. IBM believes Delta can also make meaningful contributions toward improving quantum computing skills and building a greater sense of community among a diverse set of organizations.

“Delta can work more closely with key professors and the academic research arm of N.C. State to improve their ability to teach students on a number of quantum technologies,” said Jamie Thomas, general manager overseeing the strategy and development of IBM’s Systems unit. “[Delta] can also offer up experts to many of the regional organizations in the southeast [United States] and collaborate with Research Triangle Park on a number of projects.”

Jamie Thomas

Similarly, the Oak Ridge National Laboratory serves as a quantum computing hub working with other national labs as well as with academic institutions, including the Georgia Institute of Technology.

“You can see the relationships and ecosystems building [through the network of quantum hubs], which is important because they are all working to solve concrete problems, which is necessary to increase the general maturation of quantum computing in regions across the country,” Thomas said.

IBM doubles Quantum Volume

In other quantum computing-related news, IBM officials said they have achieved another scientific milestone: the company's highest Quantum Volume to date of 32, doubling the previous high of 16. Company officials believe the Quantum Volume metric is a truer measurement of performance because it takes into consideration more than just the raw speed of its quantum computers. According to Thomas, it is a major step toward achieving Quantum Advantage, the ability to solve complex problems that are beyond the abilities of classical systems.

“This milestone is not only important to us because it edges us closer to Quantum Advantage, but because it means we have kept our commitment to double Quantum Volume on an annual basis,” Thomas said.

I think IBM got a bit upset over those recent claims by Google of achieving Quantum Advantage and so they are taking this opportunity [at CES] to reinforce their point about Quantum Volume.
Frank Dzubeck, President, Communications Network Architects

What helped IBM achieve its goal was the introduction of its 53-qubit quantum system last fall, in concert with improved qubit connectivity, a better coherence rate and enhanced error mitigation capabilities, all of which are essential factors in raising the Quantum Volume number, Thomas said.

“Underneath these metrics [are] also things like improving the ability to manage these systems, increasing their resiliency and how all the electronics interact with the processor itself,” she said.

One analyst also believes that Quantum Volume is a more practical measure of a quantum system’s power, saying that too many vendors of quantum systems are focused on their machines’ speeds and feeds — similar to the performance battles waged among competing server vendors 10 and 20 years ago. He added this isn’t a practical metric given the nature of quantum science compared to classical architectures.

“Companies such as Google are focusing on Quantum Advantage from a speeds-and-feeds perspective, but that can be just a PR game,” said Frank Dzubeck, president of Communications Network Architects Inc. “I think IBM got a bit upset over those recent claims by Google of achieving Quantum Advantage and so they are taking this opportunity [at CES] to reinforce their point about Quantum Volume,” he said.

Quantum revs car batteries

IBM has also begun working jointly with Daimler to develop the next generation of automotive batteries. The two companies said they are using quantum computers to simulate the chemical makeup of lithium-sulfur batteries, which they claim offer significantly higher energy densities than current lithium-ion batteries. Officials from both companies said their goal is to design a next-generation rechargeable battery from the ground up.

“The whole battery market is hot across all industries, particularly among automobiles,” Thomas said. “The key is finding different paths to create a battery that maintains its energy for longer periods of time and is more cost-effective for the masses.”

Another benefit of using lithium-sulfur batteries is that they eliminate the need for cobalt, a material that is largely found in the Democratic Republic of the Congo, formerly known as the Belgian Congo. Because that country and the immediately surrounding territories are often war-torn, the supply of the material can at times be constrained.

“The important considerations here are with no cobalt necessary, not only will the supply constraints disappear, but it makes these batteries for automobiles and trucks a lot less expensive,” Dzubeck said.


Quantum computing in business applications is coming

Quantum computing may still be in its infancy, but it is advancing rapidly and already showing significant potential for business applications across several industries.

At MIT Technology Review’s Future Compute conference in December 2019, Alan Baratz, CEO of D-Wave Systems Inc., a Canadian quantum computing company, discussed the benefits of quantum computing in business applications and the new capabilities it can offer.

Editor’s note: The following has been edited for clarity and brevity.

Why should CIOs be thinking about quantum computing in business applications?

Alan Baratz: Quantum computing can accelerate the time to solve the hard problems. If you are in a logistics business and you need to worry about pack and ship or vehicle scheduling; or you’re in aerospace and you need to worry about crew or flight scheduling; or a drug company and you need to worry about molecular discovery [or] computational chemistry, the compute time to solve those problems at scale can be very large.

Typically, what companies do is come up with heuristics — they try to simplify the problem. Well, quantum computing has the potential to allow you to solve the complete problem much faster to get better solutions and optimize your business.

Would you say speed is the most significant factor of quantum computing?

Baratz: Well, sometimes it’s speed, sometimes it’s quality of the solution [or] better solution within the same amount of time. Sometimes, it’s diversity of solution. One of the interesting things about the quantum computer is that maybe you don’t necessarily want the optimal solution but [rather] a set of good solutions that you can then use to optimize other things that weren’t originally a part of the problem. The quantum computer is good at giving you a set of solutions that are close to optimal in addition to the optimal solution.

What’s limiting quantum computing in terms of hardware?

Baratz: Well, up until now, in [D-Wave’s] case, it’s been the size and topology of the system because in order to solve your problem, you have to be able to map it onto the quantum processor. Remember, we aren’t gate-based, so it’s not like you specify the sequence of instructions or the gates. In order to solve your problem in our system, you have to take the whole problem and be able to map it into our hardware. That means with 2,000 qubits and each qubit connected to six others, there’s only certain size problems that you can actually map into our system. When we deliver the Advantage system next year, we double that — over 5,000 qubits [and] each qubit connected to 15 others — so, that will allow us to solve significantly larger problems.

In fact, for a doubling of the processor size, you get about a tripling of the problem size. But we’ve done one other thing: We’ve developed brand new hybrid algorithms [and] these are not types of hybrid algorithms that people typically think about. It’s not like you try to take the problem and divide it into chunks and solve [them]. This is a hybrid approach where we use a classical technique to find the core of the problem, and then we send that off to the quantum processor. With that, we can get to the point where we can solve large problems even with our 5,000-qubit system. We think once we deliver the 5,000-qubit system Advantage [and] the new hybrid solver, we’ll see more and more companies being able to solve real production problems.
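For readers who want to see what "mapping a problem onto the quantum processor" looks like in practice, here is a rough sketch using D-Wave's open source Ocean SDK (dimod and dwave-system, which the interview does not mention by name). It formulates a toy optimization problem as a QUBO and solves it with a local reference solver; the commented lines show how the same model would be minor-embedded onto the hardware topology, or handed to the Leap hybrid solver Baratz describes.

```python
# Illustrative sketch, not code from the interview. Requires:
#   pip install dimod dwave-system
# The hardware and hybrid samplers additionally need a D-Wave Leap account and API token.
import dimod

# Toy problem: choose exactly one of three options. Each choice gets a -1 reward and
# every pair chosen together incurs a +2 penalty, so the three single-choice states
# are the equally good optima -- echoing the point about getting a set of
# near-optimal solutions back rather than a single answer.
Q = {
    ("x0", "x0"): -1, ("x1", "x1"): -1, ("x2", "x2"): -1,
    ("x0", "x1"): 2, ("x0", "x2"): 2, ("x1", "x2"): 2,
}
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# Exact reference solver keeps the example runnable without quantum hardware.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)  # one lowest-energy sample
for sample, energy in sampleset.data(fields=["sample", "energy"]):
    print(sample, energy)  # all candidate solutions, lowest energy first

# On a real QPU the model must first be minor-embedded onto the chip's
# qubit-connectivity graph -- the size/topology limit Baratz describes:
# from dwave.system import DWaveSampler, EmbeddingComposite, LeapHybridSampler
# sampleset = EmbeddingComposite(DWaveSampler()).sample(bqm, num_reads=100)
# sampleset = LeapHybridSampler().sample(bqm)  # classical + quantum hybrid solver
```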

Do you think companies are in a race toward quantum computing?

Baratz: No, they’re not because in some cases they’re not aware, and in some cases they don’t understand what’s possible. The problem is there are very large companies that make a lot of noise in the quantum space and their approach to quantum computing is one where it will take many, many years before companies can actually do something useful with the system. And because they’re the loudest out there, many companies think that’s all there is. We’ve been doing a better job of getting the D-Wave message out, but we still have a ways to go.

What excites you most about quantum computing in business settings?

Baratz: First, just the ability to solve problems that can’t otherwise be solved to me is exciting. When I was at MIT, my doctorate was in theory of computation. I was kind of always of the mindset that there are just some problems that you’re not going to be able to solve with one computer. I wouldn’t say quantum computing removes, but [it] reduces that limitation — that restriction — and it makes it possible to solve problems that couldn’t otherwise be solved.

But more than that, the collection of technologies that we have to use to build our system, even that is exciting. As I mentioned during the panel, we develop new superconducting circuit fabrication recipes. And we’re kind of always advancing the state of the art there. We’re advancing the state of the art in refrigeration and how to remove contaminants from refrigerators, because it can take six weeks to calibrate a quantum computer once you cool it down, once you cool the chip down. Well, if you get contaminants in the refrigerator and you have to warm up the system and remove those contaminants, you’re going to lose two months of compute time. We have systems that run for two and a half, three years and nobody else really has the technology to do that.

It’s been said that one of the main concerns with quantum computers is increased costs. Can you talk a little more about that?

Baratz: Well, there was that power question [from the panel] about when is it going to stop all the power? So, the truth of the matter is our system runs on about 20 kilowatts. The reason is, the only thing we really need power for is the refrigerator — and we can put many chips in one refrigerator. So, as the size of the chip grows, as the number of chips grow, the power doesn’t grow. Power is the refrigerator.

Second, the systems are pricey if you want to buy one [but through] cloud access, anybody can get it. I mean, we even give free time, but we do sell it and currently we sell it at $2,000 an hour, but you can run many, many problems within that amount of time.

Will this be the first technology that CIOs will never have on premises?

Baratz: Oh, never say never. We continue to work on shrinking the size of the system as well so, who knows?


Hybrid, cost management cloud trends to continue in 2020

If the dawn of cloud computing can be pegged to AWS’ 2006 launch of EC2, then the market has entered its gangly teenage years as the new decade looms.

While the metaphor isn’t perfect, some direct parallels can be seen in the past year’s cloud trends.

For one, there’s the question of identity. In 2019, public cloud providers extended services back into customers’ on-premises environments and developed services meant to accommodate legacy workloads, rather than emphasize transformation. 

Maturity remains a hurdle for the cloud computing market, particularly in the area of cost management and optimization. Some progress occurred on this front in 2019, but there’s much more work to be done by both vendors and enterprises.

Experimentation was another hallmark of 2019 cloud computing trends, with the continued move toward containerized workloads and serverless computing. Here’s a look back at some of these cloud trends, as well as a peek ahead at what’s to come in 2020.

Hybrid cloud evolves

Hybrid cloud has been one of the more prominent cloud trends for a few years, but 2019 saw key changes in how it is marketed and sold.

Companies such as Dell EMC, Hewlett Packard Enterprise and, to a lesser extent, IBM have scuttled or scaled back their public cloud efforts and shifted to cloud services and hardware sales. This trend has roots prior to 2019, but the changes took greater hold this year.

Holger Mueller

Today, “there’s a battle between the cloud-haves and cloud have-nots,” said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

Google, as the third-place competitor in public cloud, needs to attract more workloads. Its Anthos platform for hybrid and multi-cloud container orchestration projects openness but still ties customers into a proprietary system.

In November, Microsoft introduced Azure Arc, which extends Azure management tools to on-premises and cloud platforms beyond Azure, although the latter functionality is limited for now.

Earlier this month, AWS announced the long-expected general availability of Outposts, a managed service that puts AWS-built server racks loaded with AWS software inside customer data centers to address issues such as low-latency and data residency requirements.

It’s similar in ways to Azure Stack, which Microsoft launched in 2017, but one key difference is that partners supply Azure Stack hardware. In contrast, Outposts has made AWS a hardware vendor and thus a threat to Dell/EMC, HPE and others who are after customers’ remaining on-premises IT budgets, Mueller said.

But AWS needs to prove itself capable of managing infrastructure inside customer data centers, with which those rivals have plenty of experience.

Looking ahead to 2020, one big question is whether AWS will join its smaller rivals by embracing multi-cloud. Based on the paucity of mentions of that term at re:Invent this year, and the walled-garden approach embodied by Outposts, the odds don’t look favorable.

Bare-metal options grow

Thirteen years ago, AWS launched its Elastic Compute Cloud (EC2) service with a straightforward proposition: Customers could buy VM-based compute capacity on demand. That remains a core offering of EC2 and its rivals, although the number of instance types has grown exponentially.

More recently, bare-metal instances have come into vogue. Bare metal strips out the virtualization layer, giving customers direct access to the underlying hardware. It’s a useful option for workloads that can’t tolerate the performance hit VMs carry, and it avoids the “noisy neighbor” problem that crops up with shared infrastructure.

Google rolled out managed bare-metal instances in November, following AWS, Microsoft, IBM and Oracle. Smaller providers such as CenturyLink and Packet also offer bare-metal instances. The segment overall is poised for significant growth, reaching more than $26 billion by 2025, according to one estimate.

Multiple factors will drive this growth, according to Deepak Mohan, an analyst with IDC.

Two of the biggest influences in IaaS today are enterprise workload movement into public cloud environments and cloud expansions into customers’ on-premises data centers, evidenced by Outposts, Azure Arc and the like, Mohan said.

The first trend has compelled cloud providers to support more traditional enterprise workloads, such as applications that don’t take well to virtualization or that are difficult to refactor for the cloud. Bare metal gets around this issue.

“As enterprise adoption expands, we expect bare metal to play an increasingly critical role as the primary landing zone for enterprise workloads as they transition into cloud,” Mohan said.

Cloud cost management gains focus

The year saw a wealth of activity around controlling cloud costs, whether through native tools or third-party applications. Among the more notable moves was Microsoft’s extension of Azure Cost Management to AWS, with support for Google Cloud expected next year.

But the standout development was AWS’ November launch of Savings Plans, which was seen as a vast improvement over its longstanding Reserved Instances offering.

Reserved Instances give big discounts to companies that are willing to make upfront spending commitments but have been criticized for inflexibility and a complex set of options.

Owen Rogers

“Savings Plans have massively reduced the complexity in gaining such discounts, by allowing companies to make commitments to AWS without having to be too prescriptive on the application’s specific requirements,” said Owen Rogers, who heads the digital economics unit at 451 Research. “We think this will appeal to enterprises and will eventually replace reserved instances as AWS’ de facto committed pricing model.”

The new year will see enterprises increasingly seek to optimize their costs, not just manage and report on them, and Savings Plans fit into this expectation, Rogers added.

If your enterprise hasn’t gotten serious about cloud cost management, doing so would be a good New Year’s resolution. There’s a general prescription for success in doing so, according to Corey Quinn, cloud economist at the Duckbill Group.

“Understand the goals you’re going after,” Quinn said. “What are the drivers behind your business?” Break down cloud bills into what they mean on a division, department and team-level basis. It’s also wise to start with the big numbers, Quinn said. “You need to understand that line item that makes up 40% of your bill.”
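As a concrete illustration of that advice (not something from the article), the short sketch below uses the AWS Cost Explorer API via boto3 to break a month's bill down by a cost-allocation tag and print the largest line items first. The "team" tag key is a placeholder assumption and only works in accounts where such a tag has been activated for billing.

```python
# Illustrative sketch only: break an AWS bill down by a cost-allocation tag and
# surface the biggest line items first. Assumes boto3 credentials are configured
# and that a "team" cost-allocation tag (a placeholder name) is activated in billing.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer is served from us-east-1

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-12-01", "End": "2020-01-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],  # could also group by SERVICE or LINKED_ACCOUNT
)

groups = resp["ResultsByTime"][0]["Groups"]
groups.sort(key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]), reverse=True)
total = sum(float(g["Metrics"]["UnblendedCost"]["Amount"]) for g in groups) or 1.0

# "Start with the big numbers": the handful of line items that dominate the bill.
for g in groups[:5]:
    amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
    print(f'{g["Keys"][0]:<30} ${amount:>12,.2f}  ({amount / total:.1%} of bill)')
```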

While some companies try to make cloud cost savings the job of many people across finance and IT, in most cases the responsibility shouldn’t fall on engineers, Quinn added. “You want engineers to focus on whether they can build a thing, and then cost-optimize it,” he said.

Serverless vs. containers debate mounts

One topic that could come up with more frequency in 2020 is the debate over the relative merits of serverless computing versus containers.

Serverless advocates such as Tim Wagner, inventor of AWS Lambda, contend that a movement is underfoot.

At re:Invent, the serverless features AWS launched were not “coolness for the already-drank-the-Kool-Aid crowd,” Wagner said in a recent Medium post. “This time, AWS is trying hard to win container users over to serverless. It’s the dawn of a new ‘hybrid’ era.”

Another serverless expert hailed Wagner’s stance.

“I think the container trend, at its most mature state, will resemble the serverless world in all but execution duration,” said Ryan Marsh, a DevOps trainer with TheStack.io in Houston.

Anything that allows companies to maintain the feeling of isolated and independent deployable components … is going to see adoption.
Ryan Marsh, DevOps trainer, TheStack.io

The containers vs. serverless debate has raged for at least a couple of years, and the notion that neither approach can effectively answer every problem persists. But observers such as Wagner and Marsh believe that advances in serverless tooling will shift the discussion.

AWS Fargate for EKS (Elastic Kubernetes Service) became available at re:Invent. The offering provides a serverless framework that launches, scales and manages Kubernetes container clusters on AWS. Earlier this year, Google released a similar service called Cloud Run.

The services will likely gain popularity as customers deeply invested in containers see the light, Marsh said.

“I turned down too many clients last year that had container orchestration problems. That’s frankly a self-inflicted and uninteresting problem to solve in the era of serverless,” he said.

Containers’ allure is understandable. “As a logical and deployable construct, the simplicity is sexy,” Marsh said. “In practice, it is much more complicated.”

“Anything that allows companies to maintain the feeling of isolated and independent deployable components — mimicking our warm soft familiar blankie of a VM — with containers, but removes the headache, is going to see adoption,” he added.


Pivot3, Scale Computing HCI appliances zoom in on AI, edge

Hyper-converged vendors Pivot3 and Scale Computing this week expanded their use cases with product launches.

Scale formally unveiled HE150 all-flash NVMe hyper-converged infrastructure (HCI) appliances for space-constrained edge environments. Scale sells the compute device as a three-node cluster, but it does not require a server rack.

The new device is a tiny version of the Scale HE500 HCI appliances that launched this year. HE150 measures 4.6 inches wide, 1.7 inches high and 4.4 inches deep. Scale said select customers have deployed proofs of concept.

Pivot3 rolled out AI-enabled data protection in its Acuity HCI operating software. The vendor said Pivot3 appliances can stream telemetry data from customer deployments to the vendor’s support cloud for historical analysis and troubleshooting.

HCI use cases evolve

Hyper-converged infrastructure vendors package the disparate elements of converged infrastructure in a single piece of hardware, including compute, hypervisor software, networking and storage.

Dell is the HCI market leader, in large measure due to VMware vSAN, while HCI pioneer Nutanix holds the No. 2 spot. But competition is heating up. Server vendors Cisco and Hewlett Packard Enterprise have HCI products, as does NetApp with a product using its SolidFire all-flash technology. Ctera Networks, DataCore and startup Datrium are also trying to elbow into the crowded space.

Pivot3 storage is used mostly for video surveillance, although the Austin, Texas-based vendor has focused on increasing its deal size for its Acuity systems.

Scale Computing, based in Indianapolis, sells the HC3 virtualization platform for use in edge and remote office deployments. The company has customers in education, financial services, government, healthcare and retail.

Hyper-converged infrastructure has expanded beyond its origins in virtual desktop infrastructure to support cloud analytics of primary and secondary storage, said Eric Sheppard, a research vice president in IDC’s infrastructure systems, platforms and technologies group.

“The most common use of HCI is virtualized applications, but the percentage of [hosted] apps that are mission-critical has increased considerably,” Sheppard said.

Scale HE150: Small gear for the edge

Scale’s HC3 system runs the company’s own HyperCore operating system, which Scale designed around Linux-based KVM virtualization. Unlike most HCI appliances, Scale HC3 does not support VMware.

Scale Computing HE150 HCI appliance

The HE150 includes a full version of the HyperCore operating system, including rolling updates, replication and snapshots. The device comes with up to six cores and up to 64 GB of RAM. Intel’s Frost Canyon Next Unit of Computing (NUC) mini-PC provides the compute. Storage per node is up to 2 TB with one M.2 NVMe SSD.

Traditional HCI appliances, including Scale’s larger HC3 appliances, require a dedicated backplane switch to route network traffic. The HE150 instead features new HC3 Edge Fabric software-based tunneling for communication between HC3 nodes. The tunneling is needed to accommodate the tiny form factor, said Dave Demlow, Scale’s VP of product management.

Scale recommends a three-node HE150 cluster. Data is mirrored twice between the nodes for redundancy. Demlow said the cluster takes up the space of three smart phones stacked together.

Eric Slack, a senior analyst at Evaluator Group, said Scale’s operating system enables it to sell an HCI appliance the size of Scale HE150.

“This new small device runs the full Scale HyperCore OS, which is an important feature. Scale stack is pretty thin. They don’t run VMware or a separate software-defined storage layer, so HyperCore can run with limited memory and a limited number of CPU cores,” Slack said.

Pivot3 HCI appliances

Pivot3 did not make hardware upgrades with this release. The features in Acuity center on AI-driven analytics for more automated management.

Pivot3 enhanced its Intelligence Engine policy manager with AI tools for backup and disaster recovery in multi-petabyte storage. The move comes amid research by IDC that indicates more enterprises expect HCI vendors to provide autonomous management via the cloud.

The IDC survey of 252 data centers found that 89% rely on cloud-based predictive analytics to manage IT infrastructure, but only 72% had enterprise storage systems that bundle analytics tools as part of the base price.

“The entirety of the data center infrastructure market is increasing the degree to which tasks can be automated. All roads lead toward autonomous operations, and cloud-based predictive analytics is the fastest way to get there,” Sheppard said.

Pivot3 said it added self-healing to identify failed nodes and automatically return repaired nodes to the cluster. The vendor also added delta differencing to its erasure coding for faster rebuilds.


What admins need to know about Azure Stack HCI

Despite all the promise of cloud computing, it remains out of reach for administrators who cannot, for different reasons, migrate out of the data center.

Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.

Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.

What distinguishes Azure Stack HCI from Azure Stack?

When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion among many in IT. The company already offered a similarly named product, Azure Stack, which carries the name of Microsoft’s cloud platform and runs a version of Azure inside the data center.

Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.

Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.

How is Azure Stack HCI different from the WSSD program?

While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.

Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.

Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.

For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016. For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.

What are the deployment and management options?

The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.

To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.

The Windows Admin Center server management tool offers a dashboard to check on the drive performance for issues related to latency or when a drive fails.

Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.

Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.

How much does it cost to use Azure Stack HCI?

The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.

There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.

On top of the price of the hardware are the annual maintenance and support fees, which are typically a percentage of the purchase price of the appliance.

If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor to assist with operational duties by analyzing data from applications to determine if a problem is about to occur, then additional fees will come into play. Organizations that remain fixed with on-premises use for their Azure Stack HCI system will avoid these extra costs.


AWS moves into quantum computing services with Braket

Amazon debuted a preview version of its quantum computing services this week, along with a new quantum computing research center and lab where AWS cloud users can work with quantum experts to identify practical, short-term applications.

The new AWS quantum computing managed service, called Amazon Braket, is aimed initially at scientists, researchers and developers, giving them access to quantum systems provided by IonQ, D-Wave and Rigetti.

Amazon’s quantum computing services news comes less than a month after Microsoft disclosed it is developing a chip capable of running quantum software. Microsoft also previewed a version of its Azure Quantum Service and struck partnerships with IonQ and Honeywell to help deliver the Azure Quantum Service.

In November, IBM said its Qiskit QC development framework supports the ion trap technology used by IonQ and Alpine Quantum Technologies.

Google recently claimed it was the first quantum vendor to achieve quantum supremacy — the ability to solve complex problems that classical systems either can’t solve or would take an extremely long time to solve. Company officials said it represented an important milestone.

In that particular instance, Google’s Sycamore processor solved a difficult problem in just 200 seconds — a problem that would take a classical computer 10,000 years to solve. The claim was met with a healthy amount of skepticism by some competitors and other more objective sources as well. Most said they would reserve judgement on the results until they could take a closer look at the methodology involved.

Cloud services move quantum computing forward

Peter Chapman, CEO and president of IonQ, doesn’t foresee any conflicts between the company’s respective agreements with rivals Microsoft and AWS. AWS jumping into the fray with Microsoft and IBM will help push quantum computing closer to the limelight and make users more aware of the technology’s possibilities, he said.

“There’s no question AWS’s announcements give greater visibility to what’s going on with quantum computing,” Chapman said. “Over the near term they are looking at hybrid solutions, meaning they will mix quantum and classical algorithms making [quantum development software] easier to work with,” he said.

There’s no question AWS’s announcements will give greater visibility to what’s going on with quantum computing.
Peter Chapman, CEO and president, IonQ

Microsoft and AWS are at different stages of development, making it difficult to gauge which company has advantages over the other. But what Chapman does like about AWS right now is the set of APIs that allows a developer’s application to run across the different quantum architectures of IonQ (ion trap), D-Wave (quantum annealing) and Rigetti (superconducting chips).

“At the end of the day it’s not how many qubits your system has,” Chapman said. “If your application doesn’t run on everyone’s hardware, users will be disappointed. That’s what is most important.”

Another analyst agreed that the sooner quantum algorithms can be melded with classical algorithms to produce something useful in an existing corporate IT environment, the faster quantum computing will be accepted.

“If you have to be a quantum expert to produce anything meaningful, then whatever you do produce stays in the labs,” said Frank Dzubeck, president of Communications Network Architects, Inc. “Once you integrate it with the classical world and can use it as an adjunct for what you are doing right now, that’s when [quantum technology] grows like crazy.”

Microsoft’s Quantum Development Kit, which the company open sourced earlier this year, also allows developers to create applications that operate across a range of different quantum architectures. Like AWS, Microsoft plans to combine quantum and classical algorithms to produce applications and services aimed at the scientific markets and ones that work on existing servers.

One advantage AWS and Microsoft provide for smaller quantum computing companies like IonQ, according to Chapman, is not just access to their mammoth user bases, but support for things like billing.

“If customers want to run something on our computers, they can just go to their dashboard and charge it to their AWS account,” Chapman said. “They don’t need to set up an account with us. We also don’t have to spend tons of time on the sales side convincing Fortune 1000 users to make us an approved vendor. Between the two of them [Microsoft and AWS], they have the whole world signed up as approved vendors,” he said.

The mission of the AWS Center for Quantum Computing will be to solve longer-term technical problems using quantum computers. Company officials said they have users ready to begin experimenting with the newly minted Amazon Braket but did not identify any users by name.

The closest they came was a prepared statement by Charles Toups, vice president and general manager of Boeing’s Disruptive Computing and Networks group. The company is investigating how quantum computing, sensing and networking technologies can enhance Boeing products and services for its customers, according to the statement.

“Quantum engineering is starting to make more meaningful progress and users are now asking for ways to experiment and explore the technology’s potential,” said Charlie Bell, senior vice president with AWS’s Utility Computing Services group.

AWS’s assumption going forward is that quantum computing will be a cloud-first technology, and the cloud is how AWS will provide its users with their first quantum experience, via Amazon Braket and the Quantum Solutions Lab.

Corporate and third-party developers can create their own customized algorithms with Braket, which gives them the option of executing either low-level quantum circuits or fully managed hybrid algorithms. This makes it easier to choose between software simulators and whatever quantum hardware they select.
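As a rough sketch of the low-level circuit path (based on the Amazon Braket Python SDK; the specific calls are not described in the article), the snippet below builds a two-qubit Bell-state circuit and runs it on the SDK's local simulator. Pointing the same code at an AwsDevice ARN is how a task would instead be sent to a managed simulator or to the IonQ, Rigetti or D-Wave hardware mentioned above.

```python
# Illustrative sketch using the Amazon Braket Python SDK (pip install amazon-braket-sdk);
# not code taken from the article. Builds a two-qubit Bell-state circuit and runs it on
# the local simulator, so no AWS account is needed for this part.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)          # Hadamard + CNOT = entangled Bell pair

device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)          # expect roughly half '00' and half '11'

# To target managed hardware instead (requires an AWS account, Braket access and an
# S3 location for results), the device is swapped for an AwsDevice ARN, e.g.:
# from braket.aws import AwsDevice
# qpu = AwsDevice("arn:aws:braket:::device/qpu/ionq/ionQdevice")  # example ARN
# task = qpu.run(bell, shots=1000)
```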

The AWS Center for Quantum Computing is based at Caltech, which has long invested in both experimental and theoretical quantum science and technology.


New Azure HPC and partner offerings at Supercomputing 19

For more than three decades, the researchers and practitioners who make up the high-performance computing (HPC) community have come together for their annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado to advance the state of the art in HPC. The theme for Supercomputing ‘19 is “HPC is now” – a theme that resonates strongly with the Azure HPC team, given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency InfiniBand HDR networking.
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.

Azure HPC unveils new offerings

  • The preview of new second gen AMD EPYC based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare metal supercomputers. HBv2 is not just one of the most powerful HPC servers Azure has ever made, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2 GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16 GB of HBM2 memory to provide a powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, and are designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature 8 NVIDIA Tesla V100 NVLink interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing for easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks (a generic sketch of this kind of NCCL-based distributed training job appears after this list). NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage, or to present Azure Blob storage through a POSIX (portable operating system interface) file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores and over 7,000 independent threads and a large 300 MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The 8 Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelining) or data-parallel way. An NDv3 VM also includes 40 cores of CPU, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices per Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 Update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand interconnected infrastructure. Among many other improvements, this release includes:

    • Improved error detection and reporting user interface (UI) that greatly simplify diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI that provides users direct control over node topology for latency-sensitive applications.

    • Support for Ephemeral OS disks, which improve start-up performance and cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.
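As referenced in the NDv2 item above, here is a generic, hedged sketch of the kind of distributed training job that rides on NCCL over NVLink and InfiniBand: a minimal PyTorch DistributedDataParallel loop. It is not Azure-specific code, and the model and hyperparameters are stand-ins.

```python
# Generic illustration (not Azure-specific) of an NCCL-backed distributed training
# job like those the NDv2 item describes. Launch one process per GPU, e.g.:
#   torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # NCCL carries the GPU collectives
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                             # toy training loop on random data
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                             # gradients all-reduced via NCCL
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```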

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest in capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!


Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. Customers didn’t have a strategic model to work from.

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there’s edge computing, which stores data where the user is, in between the company and the public cloud: for example, on customers’ mobile devices or on sensors in smart buildings like hospitals and factories.

That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer’s IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise – automating system updates and security enhancements, for example – brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said HPE’s Vogel. HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.


Ecstasy programming language targets cloud-native computing

While recent events have focused on Java and how it will fare as computing continues to evolve to support modern platforms and technologies, a new language is targeted directly at the cloud-native computing space — something Java continues to adjust to.

This new language, known as the Ecstasy programming language, aims to address programming complexity and to enhance security and manageability in software, which are key challenges for cloud app developers.

Oracle just completed its Oracle OpenWorld and Oracle Code One conferences, where Java was dominant. Indeed, Oracle Code One was formerly known as JavaOne until last year, when Oracle changed the name to be more inclusive of other languages.

Ironically, Cameron Purdy, a former senior vice president of development at Oracle and now CEO of Xqiz.it (pronounced “exquisite”), based in Lexington, Mass., is the co-creator of the Ecstasy language. Purdy joined Oracle in 2007, when the database giant acquired his previous startup, Tangosol, to attain its Coherence in-memory data grid technology, which remains a part of Oracle’s product line today.

Designed for containerization and the cloud-native computing era

Purdy designed Ecstasy for what he calls true containerization. It will run on a server, in a VM or in an OS container, but that is not the kind of container that Ecstasy containerization refers to. Ecstasy containers are a feature of the language itself, and they are secure, recursive, dynamic and manageable runtime containers, he said.

For security, all Ecstasy code runs inside an Ecstasy container, and Ecstasy code cannot even see the container it’s running inside of — let alone anything outside that container, like the OS, or even another container. Regarding recursivity, Ecstasy code can create nested containers inside the current container, and the code running inside those containers can create their own containers, and so on. For dynamism, containers can be created and destroyed dynamically, but they also can grow and shrink within a common, shared pool of CPU and memory resources. For manageability, any resources — including CPU, memory, storage and any I/O — consumed by an Ecstasy container can be measured and managed in real time. And all the resources within a container — including network and storage — can be virtualized, with the possibility of each container being virtualized in a completely different manner.

Overall, the goal of Ecstasy is to solve a set of problems that are intrinsic to the cloud:

  • the ability to modularize application code, so that some portions could be run all the way out on the client, or all the way back in the heart of a server cluster, or anywhere in-between — including on shared edge and CDN servers;
  • to make code that is portable and reusable across all those locations and devices;
  • to be able to securely reuse code by supporting the secure containerization of arbitrary modules of code;
  • to enable developers to manage and virtualize the resources used by this code to enhance security, manageability, real-time monitoring and cloud portability; and
  • to provide an architecture that would scale with the cloud but could also scale with the many core devices and specialized processing units that lie at the heart of new innovation — like machine learning.

General-purpose programming language

Ecstasy, like C, C++, Java, C# and Python, is a general-purpose programming language — but its most compelling feature is not what it contains, but rather what it purposefully omits, Purdy said.

For instance, all the aforementioned general-purpose languages adopted the underlying hardware architecture and OS capabilities as a foundation upon which they built their own capabilities, but additionally, these languages all exposed the complexity of the underlying hardware and OS details to the developer. This not only added to complexity, but also provided a source of vulnerability and deployment inflexibility.

As a general-purpose programming language, Ecstasy will be useful for most application developers, Purdy said. However, Xqiz.it is still in “stealth” mode as a company and in the R&D phase with the language. Its design targets all the major client device hardware and OSes, all the major cloud vendors, and all of the server back ends.

“We designed the language to be easy to pick up for anyone who is familiar with the C family of languages, which includes Java, C# and C++,” he said. “Python and JavaScript developers are likely to recognize quite a few language idioms as well.”

Ecstasy is heavily influenced by Java, so Java programmers should be able to read lots of Ecstasy code without getting confused, said Mark Falco, a senior principal software development engineer at Workday who has had early access to the software.

“To be clear, Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection,” Falco said. “Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.” The language’s similarity to Java also should help with developer adoption, he noted.

Meanwhile, Patrick Linskey, a principal engineer at Cisco and another early Ecstasy user, said, “From what I’ve seen, there’s a lot of Erlang/OTP in there under the covers, but with a much more accessible syntax.” Erlang/OTP is the Erlang language plus its runtime and standard libraries, designed for building concurrent, fault-tolerant systems.

Falco added, “Concurrent programming in Ecstasy doesn’t require any notion of synchronization, locking or atomics; you always work on your local copy of a piece of data, and this makes it much harder to screw things up.”
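That description maps onto the actor-style, message-passing model Linskey alludes to. As a rough Java analogy (not Ecstasy code), the sketch below confines mutable state to a single worker thread and hands callers futures rather than shared data, so no locks or atomics are needed; the class and method names are illustrative only.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Rough analogy of a lock-free, service-style component: all mutation
// happens on one worker thread, so callers never share mutable state.
public class CounterService {
    private long count = 0;                       // touched only by the worker thread
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public CompletableFuture<Long> increment() {
        // Callers receive a future; the increment itself runs on the worker.
        return CompletableFuture.supplyAsync(() -> ++count, worker);
    }

    public void shutdown() {
        worker.shutdown();
    }

    public static void main(String[] args) throws Exception {
        CounterService svc = new CounterService();
        System.out.println(svc.increment().get()); // 1
        System.out.println(svc.increment().get()); // 2
        svc.shutdown();
    }
}
```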

Compactness, security and isolation

Moreover, compactness, security and isolation are among the key reasons for creating a new programming language for serverless, cloud and connected-device apps, Falco added.

“Ecstasy starts off with complete isolation at its core; an Ecstasy app literally has no conduit to the outside world, not to the network, not to the disk, not to anything at all,” Falco said. “To gain access to any aspect of the outside world, an Ecstasy app must be injected with services that provide access to only a specific resource.”
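As a loose Java analogy of that injection model (again, not Ecstasy syntax), the sketch below gives an application no ambient access at all; it can use only the narrow capability its host hands it. The Console interface and class names here are hypothetical.

```java
// Loose analogy of capability-style injection: the app only ever sees
// the resources its host explicitly grants it. Names are hypothetical.
interface Console {
    void print(String line);
}

final class GreetingApp {
    private final Console console;   // the app's only window to the outside world

    GreetingApp(Console console) {   // capabilities are injected, never looked up
        this.console = console;
    }

    void run() {
        console.print("Hello from an isolated app");
    }
}

public class Host {
    public static void main(String[] args) {
        // The host decides which resource to grant; here, just standard output.
        Console stdout = System.out::println;
        new GreetingApp(stdout).run();
    }
}
```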

“The Ecstasy runtime really pushes developers toward safe patterns, without being painful,” Linskey said. “If you tried to bolt an existing language onto such a runtime, you’d end up with lots of tough static analysis checks, runtime assertions” and other performance penalties.

Indeed, one of the more powerful components of Ecstasy is the hard separation of application logic and deployment, noted Rob Lee, another early Ecstasy user who is vice president and chief architect at Pure Storage in Mountain View, Calif. “This allows developers to focus on building the logic of their application — what it should do and how it should do it, rather than managing the combinatorics of details and consequences of where it is running,” he noted.

What about adoption?

However, adoption will be the “billion-dollar” issue for the Ecstasy programming language, Lee said, noting that he likes the language’s chances based on what he’s seen. Yet, building adoption for a new runtime and language requires a lot of careful and intentional community-building.

Cisco is an easy potential candidate for Ecstasy usage, Linskey said. “We build a lot of middlebox-style services in which we pull together data from a few databases and a few internal and external services and serve that up to our clients,” he said. “An asynchronous-first runtime with the isolation and security properties of Ecstasy would be a great fit for us.”

Meanwhile, Java aficionados expect that Java will continue to evolve to meet cloud-native computing needs and future challenges. At Oracle Code One, Stewart Bryson, CEO of Red Pill Analytics in Atlanta, said he believes Java has another 10 to 20 years of viability, but there is room for another language that will better enable developers for the cloud. However, that language could be one that runs on the Java Virtual Machine, such as Kotlin, Scala, Clojure and others, he said.

Go to Original Article
Author:

Analyzing data from space – the ultimate intelligent edge scenario – The Official Microsoft Blog

Space represents the next frontier for cloud computing. Microsoft’s unique approach to partnerships with pioneering companies in the space industry means that, together, we can build platforms and tools that foster significant leaps forward and help us gain deeper insights from the data gleaned from space.

One of the primary challenges for this industry is the sheer amount of data available from satellites and the infrastructure required to bring this data to ground, analyze the data and then transport it to where it’s needed. With almost 3,000 new satellites forecast to launch by 2026 [1] and a threefold increase in the number of small satellite launches per year, the magnitude of this challenge is growing rapidly.

Essentially, this is the ultimate intelligent edge scenario – where massive amounts of data must be processed at the edge – whether that edge is in space or on the ground. Then the data can be directed to where it’s needed for further analytics or combined with other data sources to make connections that simply weren’t possible before.

DIU chooses Microsoft and Ball Aerospace for space analytics

To help with these challenges, the Defense Innovation Unit (DIU) just selected Microsoft and Ball Aerospace to build a solution demonstrating agile cloud processing capabilities in support of the U.S. Air Force’s Commercially Augmented Space Inter Networked Operations (CASINO) project.

With the aim of making satellite data more actionable more quickly, Ball Aerospace and Microsoft teamed up to answer the question: “What would it take to completely transform what a ground station looks like, and downlink that data directly to the cloud?”

The solution involves placing electronically steered flat panel antennas on the roof of a Microsoft datacenter. These phased array antennas don’t require much power and need only a couple of square meters of roof space. This innovation can connect multiple low earth orbit (LEO) satellites with a single antenna aperture, significantly accelerating the delivery rate of data from satellite to end user with data piped directly into Microsoft Azure from the rooftop array.

Analytics for a massive confluence of data

Azure provides the foundational engine for Ball Aerospace algorithms in this project, processing worldwide data streams from up to 20 satellites. With the data now in Azure, customers can direct that data to where it best serves the mission need, whether that’s moving it to Azure Government to meet compliance requirements such as ITAR or combining it with data from other sources, such as weather and radar maps, to gain more meaningful insights.

In working with Microsoft, Steve Smith, vice president and general manager of Systems Engineering Solutions at Ball Aerospace, called this type of data processing system, which leverages Ball phased array technology and imagery exploitation algorithms in Azure, “flexible and scalable – designed to support additional satellites and processing capabilities. This type of data processing in the cloud provides actionable, relevant information quickly and more cost-effectively to the end user.”

With Azure, customers gain access to advanced analytics capabilities such as Azure Machine Learning and Azure AI. These enable end users to build models and make predictions based on a confluence of data coming from multiple sources, including multiple concurrent satellite feeds. Customers can also harness Microsoft’s global fiber network to rapidly deliver the data to where it’s needed, using services such as ExpressRoute and ExpressRoute Global Reach. In addition, ExpressRoute now enables customers to ingest satellite data from several new connectivity partners to address the challenges of operating in remote locations.
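As a deliberately simplified illustration of the ingest step, the sketch below assumes the Azure Storage Blob SDK for Java (com.azure:azure-storage-blob) and uses hypothetical container, blob and file names; it simply lands a downlinked frame in Blob Storage, where downstream analytics jobs could pick it up.

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class SatelliteIngest {
    public static void main(String[] args) {
        // Connection string, container and file names below are placeholders.
        String connectionString = System.getenv("AZURE_STORAGE_CONNECTION_STRING");

        BlobServiceClient service = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient();

        // Landing zone for raw downlinked frames; analytics jobs read from here.
        BlobContainerClient container = service.getBlobContainerClient("satellite-downlink");
        BlobClient blob = container.getBlobClient("leo-pass-001.tif");

        // Upload a locally captured frame; overwrite if a retry already wrote it.
        blob.uploadFromFile("/data/leo-pass-001.tif", true);

        System.out.println("Uploaded to " + blob.getBlobUrl());
    }
}
```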

For tactical units in the field, this technology can be replicated to bring information to where it’s needed, even in disconnected scenarios. As an example, phased array antennas mounted to a mobile unit can pipe data directly into a tactical datacenter or Data Box Edge appliance, delivering unprecedented situational awareness in remote locations.

A similar approach can be used for commercial applications, including geological exploration and environmental monitoring in disconnected or intermittently connected scenarios. Ball Aerospace specializes in weather satellites, and now customers can more quickly get that data down and combine it with locally sourced data in Azure, whether for agricultural, ecological, or disaster response scenarios.

This partnership with Ball Aerospace enables us to bring satellite data to ground and cloud faster than ever, leapfrogging other solutions on the market. Our joint innovation in direct satellite-to-cloud communication and accelerated data processing provides the Department of Defense, including the Air Force, with entirely new capabilities to explore as they continue to advance their mission.

  1. https://www.satellitetoday.com/innovation/2017/10/12/satellite-launches-increase-threefold-next-decade/


Go to Original Article
Author: Microsoft News Center