Tag Archives: computing

What admins need to know about Azure Stack HCI

Despite all the promise of cloud computing, it remains out of reach for administrators who, for various reasons, cannot migrate out of the data center.

Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.

Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.

What distinguishes Azure Stack HCI from Azure Stack?

When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion from many in IT. The company already offered a similarly named product, Azure Stack, which runs a version of the Azure cloud platform inside the data center.

Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.

Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.

How is Azure Stack HCI different from the WSSD program?

While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.

Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.

For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016.

Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.

For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.

What are the deployment and management options?

The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.

To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.

Windows Admin Center drive dashboard
The Windows Admin Center server management tool offers a dashboard for checking drive performance, including latency issues and drive failures.

Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.

Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.

How much does it cost to use Azure Stack HCI?

The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.

There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.

On top of the hardware price are annual maintenance and support fees, typically a percentage of the appliance's purchase price.

If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor to assist with operational duties by analyzing data from applications to determine if a problem is about to occur, then additional fees will come into play. Organizations that remain fixed with on-premises use for their Azure Stack HCI system will avoid these extra costs.
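
As a rough, back-of-the-envelope illustration of how those pieces add up, here is a minimal Python sketch. The $10,000 node price comes from the catalog estimate above, while the support rate and the monthly charge for optional Azure services are placeholder assumptions, not published prices.

    # Rough cost model for an Azure Stack HCI cluster; all rates are assumptions.
    def azure_stack_hci_cost(nodes, node_price=10_000, support_rate=0.20,
                             azure_services_monthly=0.0, years=3):
        hardware = nodes * node_price                   # up-front appliance cost
        support = hardware * support_rate * years       # annual maintenance/support fees
        cloud = azure_services_monthly * 12 * years     # optional hybrid services, e.g. Azure Monitor
        return hardware + support + cloud

    # Example: a four-node cluster that also uses some metered Azure services.
    print(f"Estimated 3-year cost: ${azure_stack_hci_cost(4, azure_services_monthly=150):,.0f}")

Swapping in a real vendor quote and actual Azure pricing turns a sketch like this into a usable comparison against existing on-premises costs.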


AWS moves into quantum computing services with Braket

Amazon debuted a preview version of its quantum computing services this week, along with a new quantum computing research center and lab where AWS cloud users can work with quantum experts to identify practical, short-term applications.

The new AWS quantum computing managed service, called Amazon Braket, is aimed initially at scientists, researchers and developers, giving them access to quantum systems provided by IonQ, D-Wave and Rigetti.

Amazon’s quantum computing services news comes less than a month after Microsoft disclosed it is developing a chip capable of running quantum software. Microsoft also previewed a version of its Azure Quantum Service and struck partnerships with IonQ and Honeywell to help deliver the Azure Quantum Service.

In November, IBM said its Qiskit QC development framework supports IonQ’s ion trap technology, used by IonQ and Alpine Quantum Technologies.

Google recently claimed it was the first quantum vendor to achieve quantum supremacy — the ability to solve complex problems that classical systems either can't solve or would need an extremely long time to solve. Company officials said it represented an important milestone.

In that particular instance, Google's Sycamore processor solved a difficult problem in just 200 seconds — a problem that would take a classical computer 10,000 years to solve. The claim was met with a healthy amount of skepticism from some competitors, as well as from more objective observers. Most said they would reserve judgment on the results until they could take a closer look at the methodology involved.

Cloud services move quantum computing forward

Peter Chapman, CEO and president of IonQ, doesn't foresee any conflict between IonQ's agreements with rivals Microsoft and AWS. AWS jumping into the fray alongside Microsoft and IBM will help push quantum computing closer to the limelight and make users more aware of the technology's possibilities, he said.

“There’s no question AWS’s announcements give greater visibility to what’s going on with quantum computing,” Chapman said. “Over the near term they are looking at hybrid solutions, meaning they will mix quantum and classical algorithms making [quantum development software] easier to work with,” he said.

There’s no question AWS’s announcements will give greater visibility to what’s going on with quantum computing.
Peter Chapman, CEO and president, IonQ

Microsoft and AWS are at different stages of development, making it difficult to gauge which company has advantages over the other. But what Chapman does like about AWS right now is the set of APIs that allows a developer’s application to run across the different quantum architectures of IonQ (ion trap), D-Wave (quantum annealing) and Rigetti (superconducting chips).

“At the end of the day it’s not how many qubits your system has,” Chapman said. “If your application doesn’t run on everyone’s hardware, users will be disappointed. That’s what is most important.”

Another analyst agreed that the sooner quantum algorithms can be melded with classical algorithms to produce something useful in an existing corporate IT environment, the faster quantum computing will be accepted.

“If you have to be a quantum expert to produce anything meaningful, then whatever you do produce stays in the labs,” said Frank Dzubeck, president of Communications Network Architects, Inc. “Once you integrate it with the classical world and can use it as an adjunct for what you are doing right now, that’s when [quantum technology] grows like crazy.”

Microsoft’s Quantum Development Kit, which the company open sourced earlier this year, also allows developers to create applications that operate across a range of different quantum architectures. Like AWS, Microsoft plans to combine quantum and classical algorithms to produce applications and services aimed at the scientific markets and ones that work on existing servers.

One advantage AWS and Microsoft provide for smaller quantum computing companies like IonQ, according to Chapman, is not just access to their mammoth user bases, but support for things like billing.

“If customers want to run something on our computers, they can just go to their dashboard and charge it to their AWS account,” Chapman said. “They don’t need to set up an account with us. We also don’t have to spend tons of time on the sales side convincing Fortune 1000 users to make us an approved vendor. Between the two of them [Microsoft and AWS], they have the whole world signed up as approved vendors,” he said.

The mission of the AWS Center for Quantum Computing will be to solve longer-term technical problems using quantum computers. Company officials said they have users ready to begin experimenting with the newly minted Amazon Braket but did not identify any users by name.

The closest they came was a prepared statement by Charles Toups, vice president and general manager of Boeing’s Disruptive Computing and Networks group. The company is investigating how quantum computing, sensing and networking technologies can enhance Boeing products and services for its customers, according to the statement.

“Quantum engineering is starting to make more meaningful progress and users are now asking for ways to experiment and explore the technology’s potential,” said Charlie Bell, senior vice president with AWS’s Utility Computing Services group.

AWS's assumption going forward is that quantum computing will be a cloud-first technology, so the company plans to give users their first quantum experience through Amazon Braket and the Quantum Solutions Lab.

Corporate and third-party developers can create their own customized algorithms with Braket, which gives them the option of executing either low-level quantum circuits or fully managed hybrid algorithms, and makes it easier to move between software simulators and whatever quantum hardware they select.
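
To give a sense of what that looks like in practice, here is a minimal sketch using the Amazon Braket Python SDK's local simulator. The managed-device ARN and S3 location shown in the comments are placeholders rather than confirmed identifiers, and running on real hardware requires an AWS account with access to the preview.

    # Minimal Bell-state circuit with the Amazon Braket SDK (pip install amazon-braket-sdk).
    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    bell = Circuit().h(0).cnot(0, 1)          # entangle qubits 0 and 1

    device = LocalSimulator()                 # software simulator; no QPU required
    task = device.run(bell, shots=1000)
    print(task.result().measurement_counts)   # expect roughly half '00' and half '11'

    # Targeting managed hardware only changes the device object (ARN and bucket are placeholders):
    # from braket.aws import AwsDevice
    # qpu = AwsDevice("arn:aws:braket:::device/qpu/<provider>/<device>")
    # task = qpu.run(bell, ("<s3-bucket>", "<prefix>"), shots=1000)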

The AWS Center for Quantum Computing is based at Caltech, which has long invested in both experimental and theoretical quantum science and technology.


New Azure HPC and partner offerings at Supercomputing 19

For more than three decades, the researchers and practitioners who make up the high-performance computing (HPC) community have come together for an annual event. More than ten thousand strong, the global community will converge on Denver, Colorado, to advance the state of the art in HPC. The theme for Supercomputing '19 is "HPC is now" – a theme that resonates strongly with the Azure HPC team, given the breakthrough capabilities we've been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds' commodity networks. Azure supports your tightly coupled workloads at up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low-latency InfiniBand HDR networking. (A minimal MPI sketch follows this list.)
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.
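
For readers unfamiliar with tightly coupled workloads, the following minimal sketch (Python with mpi4py, rather than a production Fortran or C solver) shows the kind of collective operation an MPI job performs across InfiniBand-connected nodes. It is illustrative only and not tied to any particular Azure VM size.

    # Minimal tightly coupled MPI example (pip install mpi4py numpy).
    # Launch across nodes with, for example: mpirun -np 4 python allreduce_demo.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes a local partial result...
    local = np.full(1_000_000, rank, dtype=np.float64)
    total = np.empty_like(local)

    # ...and Allreduce combines them across every rank in the job -- the
    # latency-sensitive communication pattern that InfiniBand accelerates.
    comm.Allreduce(local, total, op=MPI.SUM)

    if rank == 0:
        print("ranks:", comm.Get_size(), "first element of reduced array:", total[0])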

Azure HPC unveils new offerings

  • The preview of new second-gen AMD EPYC-based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud's first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world's largest and most powerful bare metal supercomputers. HBv2 is not just one of the most powerful HPC servers Azure has ever made, but also one of the most affordable. HBv2 VMs are available now. (A provisioning sketch appears after this list.)
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16GB of HBM2 memory to provide powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, and are designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature eight NVIDIA Tesla V100 NVLink-interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing for easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage or to present Azure Blob storage through a POSIX (portable operating system interface) file interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure's first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU's novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores and over 7,000 independent threads, and 300 MB of on-chip memory that delivers up to 45 TB/s of memory bandwidth. The eight Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect, enabling large models to be trained in a model-parallel (including pipelining) or data-parallel way. An NDv3 VM also includes 40 cores of CPU, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore's Poplar® software development toolkit, and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices as an Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx's U250 card and SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure's unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • Improved error detection and reporting user interface (UI) that greatly simplify diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI that gives users direct control over node topology for latency-sensitive applications.

    • Support for ephemeral OS disks, which improve start-up performance and cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.
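
As a rough sketch of how one of the HBv2 nodes described above might be provisioned programmatically, the following uses the Azure SDK for Python. The resource group, network interface ID, credentials and image values are placeholders and assumptions; check current Azure documentation for exact VM size and image names before relying on them.

    # Illustrative HBv2 provisioning sketch with the Azure SDK for Python
    # (pip install azure-identity azure-mgmt-compute); values in <...> are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = compute.virtual_machines.begin_create_or_update(
        "<resource-group>",          # assumed to exist already
        "hbv2-node-01",
        {
            "location": "southcentralus",
            "hardware_profile": {"vm_size": "Standard_HB120rs_v2"},   # HBv2 size
            "storage_profile": {
                "image_reference": {                                  # assumed HPC image
                    "publisher": "OpenLogic", "offer": "CentOS-HPC",
                    "sku": "7.7", "version": "latest",
                }
            },
            "os_profile": {
                "computer_name": "hbv2-node-01",
                "admin_username": "azureuser",
                "admin_password": "<strong-password>",
            },
            "network_profile": {
                "network_interfaces": [{"id": "<existing-nic-resource-id>"}]
            },
        },
    )
    vm = poller.result()
    print(vm.name, vm.hardware_profile.vm_size)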

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!


Microsoft’s new approach to hybrid: Azure services when and where customers need them | Innovation Stories

As business computing needs have grown more complex and sophisticated, many enterprises have discovered they need multiple systems to meet various requirements – a mix of technology environments in multiple locations, known as hybrid IT or hybrid cloud.

Technology vendors have responded with an array of services and platforms – public clouds, private clouds and the growing edge computing model – but there hasn’t necessarily been a cohesive strategy to get them to work together.

“We got here in an ad hoc fashion,” said Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise. Customers didn't have a strategic model to work from.

Instead, he said, various business owners in the same company may have bought different software as a service (SaaS) applications, or developers may have independently started leveraging Amazon Web Services, Azure or Google Cloud Platform to develop a set of applications.

At its Ignite conference this week in Orlando, Florida, Microsoft announced its solution to such cloud sprawl. The company has launched a preview of Azure Arc, which offers Azure services and management to customers on other clouds or infrastructure, including those offered by Amazon and Google.

John JG Chirapurath, general manager for Azure data, blockchain and artificial intelligence at Microsoft, said the new service is both an acknowledgement of, and a response to, the reality that many companies face today. They are running various parts of their businesses on different cloud platforms, and they also have a lot of data stored on their own new or legacy systems.

In all those cases, he said, these customers are telling Microsoft they could use the benefits of Azure cloud innovation whether or not their data is stored in the cloud, and they could benefit from having the same Azure capabilities – including security safeguards – available to them across their entire portfolio.

“We are offering our customers the ability to take their services, untethered from Azure, and run them inside their own datacenter or in another cloud,” Chirapurath said.

Microsoft says Azure Arc builds on years of work the company has done to serve hybrid cloud needs. For example, Azure Resource Manager, released in 2014, was created with the vision that it would manage resources outside of Azure, including in companies’ internal servers and on other clouds.

That flexibility can help customers operate their services on a mix of clouds more efficiently, without purchasing new hardware or switching among cloud providers. Companies can use a public cloud to obtain computing power and data storage from an outside vendor, but they can also house critical applications and sensitive data on their own premises in a private cloud or server.

Then there's edge computing, which stores data where the user is, between the company and the public cloud: for example, on customers' mobile devices or on sensors in smart buildings such as hospitals and factories.


That’s compelling for companies that need to run AI models on systems that aren’t reliably connected to the cloud, or to make computations more quickly than if they had to send large amounts of data to and from the cloud. But it also must work with companies’ cloud-based, internet-connected systems.

“A customer at the edge doesn’t want to use different app models for different environments,” said Mark Russinovich, Azure chief technology officer. “They need apps that span cloud and edge, leveraging the same code and same management constructs.”

Streamlining and standardizing a customer's IT structure gives developers more time to build applications that produce value for the business instead of managing multiple operating models. And enabling Azure to integrate administrative and compliance needs across the enterprise, such as automating system updates and security enhancements, brings additional savings in time and money.

“You begin to free up people to go work on other projects, which means faster development time, faster time to market,” said HPE’s Vogel. HPE is working with Microsoft on offerings that will complement Azure Arc.

Arpan Shah, general manager of Azure infrastructure, said Azure Arc allows companies to use Azure’s governance tools for their virtual machines, Kubernetes clusters and data across different locations, helping ensure companywide compliance on things like regulations, security, spending policies and auditing tools.

Azure Arc is underpinned in part by Microsoft’s commitment to technologies that customers are using today, including virtual machines, containers and Kubernetes, an open source system for organizing and managing containers. That makes clusters of applications easily portable across a hybrid IT environment – to the cloud, the edge or an internal server.

“It’s easy for a customer to put that container anywhere,” Chirapurath said. “Today, you can keep it here. Tomorrow, you can move it somewhere else.”

Microsoft says these latest Azure updates reflect an ongoing effort to better understand the complex needs of customers trying to manage their Linux and Windows servers, Kubernetes clusters and data across environments.

“This is just the latest wave of this sort of innovation,” Chirapurath said. “We’re really thinking much more expansively about customer needs and meeting them according to how they’d like to run their applications and services.”

Top image: Erik Vogel, global vice president for customer experience for HPE GreenLake at Hewlett Packard Enterprise, with a prototype of memory-driven computing. HPE is working with Microsoft on offerings that will complement Azure Arc. Photo by John Brecher for Microsoft.



Ecstasy programming language targets cloud-native computing

While recent events have focused on Java and how it will fare as computing continues to evolve to support modern platforms and technologies, a new language is targeted directly at the cloud-native computing space — something Java continues to adjust to.

This new language, known as the Ecstasy programming language, aims to address programming complexity and to enhance security and manageability in software, which are key challenges for cloud app developers.

Oracle just completed its Oracle Open World and Oracle Code One conferences, where Java was dominant. Indeed, Oracle Code One was formerly known as JavaOne until last year, when Oracle changed its name to be more inclusive of other languages.

Ironically, Cameron Purdy, a former senior vice president of development at Oracle and now CEO of Xqiz.it (pronounced “exquisite”), based in Lexington, Mass., is the co-creator of the Ecstasy language. Purdy joined Oracle in 2007, when the database giant acquired his previous startup, Tangosol, to attain its Coherence in-memory data grid technology, which remains a part of Oracle’s product line today.

Designed for containerization and the cloud-native computing era

Purdy designed Ecstasy for what he calls true containerization. It will run on a server, in a VM or in an OS container, but that is not the kind of container that Ecstasy containerization refers to. Ecstasy containers are a feature of the language itself, and they are secure, recursive, dynamic and manageable runtime containers, he said.

For security, all Ecstasy code runs inside an Ecstasy container, and Ecstasy code cannot even see the container it’s running inside of — let alone anything outside that container, like the OS, or even another container. Regarding recursivity, Ecstasy code can create nested containers inside the current container, and the code running inside those containers can create their own containers, and so on. For dynamism, containers can be created and destroyed dynamically, but they also can grow and shrink within a common, shared pool of CPU and memory resources. For manageability, any resources — including CPU, memory, storage and any I/O — consumed by an Ecstasy container can be measured and managed in real time. And all the resources within a container — including network and storage — can be virtualized, with the possibility of each container being virtualized in a completely different manner.

Overall, the goal of Ecstasy is to solve a set of problems that are intrinsic to the cloud:

  • the ability to modularize application code, so that some portions could be run all the way out on the client, or all the way back in the heart of a server cluster, or anywhere in-between — including on shared edge and CDN servers;
  • to make code that is portable and reusable across all those locations and devices;
  • to be able to securely reuse code by supporting the secure containerization of arbitrary modules of code;
  • to enable developers to manage and virtualize the resources used by this code to enhance security, manageability, real-time monitoring and cloud portability; and
  • to provide an architecture that would scale with the cloud but could also scale with the many core devices and specialized processing units that lie at the heart of new innovation — like machine learning.

General-purpose programming language

Ecstasy, like C, C++, Java, C# and Python, is a general-purpose programming language — but its most compelling feature is not what it contains, but rather what it purposefully omits, Purdy said.

For instance, all the aforementioned general-purpose languages adopted the underlying hardware architecture and OS capabilities as a foundation upon which they built their own capabilities, but additionally, these languages all exposed the complexity of the underlying hardware and OS details to the developer. This not only added to complexity, but also provided a source of vulnerability and deployment inflexibility.

As a general-purpose programming language, Ecstasy will be useful for most application developers, Purdy said. However, Xqiz.it is still in “stealth” mode as a company and in the R&D phase with the language. Its design targets all the major client device hardware and OSes, all the major cloud vendors, and all of the server back ends.

“We designed the language to be easy to pick up for anyone who is familiar with the C family of languages, which includes Java, C# and C++,” he said. “Python and JavaScript developers are likely to recognize quite a few language idioms as well.”

Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection. Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.
Mark Falco, senior principal software development engineer, Workday

Ecstasy is heavily influenced by Java, so Java programmers should be able to read lots of Ecstasy code without getting confused, said Mark Falco, a senior principal software development engineer at Workday who has had early access to the software.

“To be clear, Ecstasy is not a superset of Java, but [it] definitely [has] a large syntactic intersection,” Falco said. “Ecstasy adds lots and lots onto Java to improve both developer productivity, as well as program correctness.” The language’s similarity to Java also should help with developer adoption, he noted.

However, Patrick Linskey, a principal engineer at Cisco and another early Ecstasy user, said, “From what I’ve seen, there’s a lot of Erlang/OTP in there under the covers, but with a much more accessible syntax.” Erlang/OTP is a development environment for concurrent programming.

Falco added, “Concurrent programming in Ecstasy doesn’t require any notion of synchronization, locking or atomics; you always work on your local copy of a piece of data, and this makes it much harder to screw things up.”

Compactness, security and isolation

Moreover, a few key reasons for creating a new programming language for serverless, cloud and connected device apps are compactness, security and isolation, he added.

“Ecstasy starts off with complete isolation at its core; an Ecstasy app literally has no conduit to the outside world, not to the network, not to the disk, not to anything at all,” Falco said. “To gain access to any aspect of the outside world, an Ecstasy app must be injected with services that provide access to only a specific resource.”
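
Ecstasy's own syntax aside, the injection idea Falco describes can be loosely illustrated in Python: the "application" below has no way to reach the filesystem on its own and only ever touches whatever storage service the host chooses to hand it. This is an analogy for the dependency-injection pattern, not Ecstasy code.

    # Loose Python analogy for Ecstasy-style resource injection (not Ecstasy code).
    class ReadOnlyStorage:
        """The only 'outside world' the app ever sees: a narrow, injected service."""
        def __init__(self, allowed_dir):
            self.allowed_dir = allowed_dir

        def read(self, name):
            # The host decides what is reachable; the app cannot open anything else itself.
            with open(f"{self.allowed_dir}/{name}", "r", encoding="utf-8") as f:
                return f.read()

    class App:
        def __init__(self, storage):      # resources arrive only by injection
            self.storage = storage

        def run(self):
            return self.storage.read("config.txt")

    # The host (the 'container') wires up the resource and hands it to the app.
    app = App(ReadOnlyStorage("/srv/sandbox"))
    # app.run() would now read only what the host chose to expose.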

“The Ecstasy runtime really pushes developers toward safe patterns, without being painful,” Linskey said. “If you tried to bolt an existing language onto such a runtime, you’d end up with lots of tough static analysis checks, runtime assertions” and other performance penalties.

Indeed, one of the more powerful components of Ecstasy is the hard separation of application logic and deployment, noted Rob Lee, another early Ecstasy user who is vice president and chief architect at Pure Storage in Mountain View, Calif. “This allows developers to focus on building the logic of their application — what it should do and how it should do it, rather than managing the combinatorics of details and consequences of where it is running,” he noted.

What about adoption?

However, adoption will be the “billion-dollar” issue for the Ecstasy programming language, Lee said, noting that he likes the language’s chances based on what he’s seen. Yet, building adoption for a new runtime and language requires a lot of careful and intentional community-building.

Cisco is an easy potential candidate for Ecstasy usage, Linskey said. “We build a lot of middlebox-style services in which we pull together data from a few databases and a few internal and external services and serve that up to our clients,” he said. “An asynchronous-first runtime with the isolation and security properties of Ecstasy would be a great fit for us.”

Meanwhile, Java aficionados expect that Java will continue to evolve to meet cloud-native computing needs and future challenges. At Oracle Code One, Stewart Bryson, CEO of Red Pill Analytics in Atlanta, said he believes Java has another 10 to 20 years of viability, but there is room for another language that will better enable developers for the cloud. However, that language could be one that runs on the Java Virtual Machine, such as Kotlin, Scala, Clojure and others, he said.


Analyzing data from space – the ultimate intelligent edge scenario

Space represents the next frontier for cloud computing, and Microsoft’s unique approach to partnerships with pioneering companies in the space industry means together we can build platforms and tools that foster significant leaps forward, helping us gain deeper insights from the data gleaned from space.

One of the primary challenges for this industry is the sheer amount of data available from satellites and the infrastructure required to bring this data to ground, analyze the data and then transport it to where it's needed. With almost 3,000 new satellites forecast to launch by 2026[1] and a threefold increase in the number of small satellite launches per year, the magnitude of this challenge is growing rapidly.

Essentially, this is the ultimate intelligent edge scenario – where massive amounts of data must be processed at the edge – whether that edge is in space or on the ground. Then the data can be directed to where it’s needed for further analytics or combined with other data sources to make connections that simply weren’t possible before.

DIU chooses Microsoft and Ball Aerospace for space analytics

To help with these challenges, the Defense Innovation Unit (DIU) just selected Microsoft and Ball Aerospace to build a solution demonstrating agile cloud processing capabilities in support of the U.S. Air Force’s Commercially Augmented Space Inter Networked Operations (CASINO) project.

With the aim of making satellite data more actionable more quickly, Ball Aerospace and Microsoft teamed up to answer the question: “what would it take to completely transform what a ground station looks like, and downlink that data directly to the cloud?”

The solution involves placing electronically steered flat panel antennas on the roof of a Microsoft datacenter. These phased array antennas don’t require much power and need only a couple of square meters of roof space. This innovation can connect multiple low earth orbit (LEO) satellites with a single antenna aperture, significantly accelerating the delivery rate of data from satellite to end user with data piped directly into Microsoft Azure from the rooftop array.

Analytics for a massive confluence of data

Azure provides the foundational engine for Ball Aerospace algorithms in this project, processing worldwide data streams from up to 20 satellites. With the data now in Azure, customers can direct that data to where it best serves the mission need, whether that’s moving it to Azure Government to meet compliance requirements such as ITAR or combining it with data from other sources, such as weather and radar maps, to gain more meaningful insights.
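
As a small illustration of the "downlink straight into Azure" step, the sketch below uses the Azure Blob Storage SDK for Python. The connection string, container and file names are placeholders; the actual CASINO ground-station pipeline is not public, so this shows only the general ingest pattern.

    # Illustrative ingest of a downlinked satellite image into Azure Blob storage
    # (pip install azure-storage-blob); connection string and names are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    container = service.get_container_client("satellite-downlink")

    with open("pass_20191101_1532.tif", "rb") as data:   # hypothetical capture file
        container.upload_blob(name="leo-sat-07/pass_20191101_1532.tif",
                              data=data, overwrite=True)

    # From here the blob can feed Azure Machine Learning jobs or be replicated
    # to Azure Government, as described above.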

In working with Microsoft, Steve Smith, vice president and general manager, Systems Engineering Solutions at Ball Aerospace, called this type of data processing system, which leverages Ball phased array technology and imagery exploitation algorithms in Azure, “flexible and scalable – designed to support additional satellites and processing capabilities. This type of data processing in the cloud provides actionable, relevant information quickly and more cost-effectively to the end user.”

With Azure, customers gain its advanced analytics capabilities such as Azure Machine Learning and Azure AI. This enables end users to build models and make predictions based on a confluence of data coming from multiple sources, including multiple concurrent satellite feeds. Customers can also harness Microsoft’s global fiber network to rapidly deliver the data to where it’s needed using services such as ExpressRoute and ExpressRoute Global Reach. In addition, ExpressRoute now enables customers to ingest satellite data from several new connectivity partners to address the challenges of operating in remote locations.

For tactical units in the field, this technology can be replicated to bring information to where it’s needed, even in disconnected scenarios. As an example, phased array antennas mounted to a mobile unit can pipe data directly into a tactical datacenter or Data Box Edge appliance, delivering unprecedented situational awareness in remote locations.

A similar approach can be used for commercial applications, including geological exploration and environmental monitoring in disconnected or intermittently connected scenarios. Ball Aerospace specializes in weather satellites, and now customers can more quickly get that data down and combine it with locally sourced data in Azure, whether for agricultural, ecological, or disaster response scenarios.

This partnership with Ball Aerospace enables us to bring satellite data to ground and cloud faster than ever, leapfrogging other solutions on the market. Our joint innovation in direct satellite-to-cloud communication and accelerated data processing provides the Department of Defense, including the Air Force, with entirely new capabilities to explore as they continue to advance their mission.

  1. https://www.satellitetoday.com/innovation/2017/10/12/satellite-launches-increase-threefold-next-decade/



Learn at your own pace with Microsoft Quantum Katas

For those who want to explore quantum computing and learn the Q# programming language at their own pace, we have created the Quantum Katas – an open source project containing a series of programming exercises that provide immediate feedback as you progress.

Coding katas are great tools for learning a programming language. They rely on several simple learning principles: active learning, incremental complexity growth, and feedback.

The Microsoft Quantum Katas are a series of self-paced tutorials aimed at teaching elements of quantum computing and Q# programming at the same time. Each kata offers a sequence of tasks on a certain quantum computing topic, progressing from simple to challenging. Each task requires you to fill in some code; the first task might require just one line, and the last one might require a sizable fragment of code. A testing framework validates your solutions, providing real-time feedback.

Working with the Quantum Katas in Visual Studio

Programming competitions are another great way to test your quantum computing skills. Earlier this month, we ran the first Q# coding contest and the response was tremendous. More than 650 participants from all over the world joined the contest or the warmup round held the week prior. More than 350 contest participants solved at least one problem, while 100 participants solved all fifteen problems! The contest winner solved all problems in less than 2.5 hours. You can find problem sets for the warmup round and main contest by following the links below. The Quantum Katas include the problems offered in the contest, so you can try solving them at your own pace.

We hope you find the Quantum Katas project useful in learning Q# and quantum computing. As we work on expanding the set of topics covered in the katas, we look forward to your feedback and contributions!

The cloud-enabled workforce: Prepping IT staff for success

In the world of IT, change is the new normal. Consider cloud computing: The cloud is not just changing how IT works, but also driving a shift in desired IT skill sets.

Modern-day IT staff needs to acquire a new set of domain skills for process automation, architecture, resource optimization and cost management to drive cloud-based initiatives, according to a recent guide published by the Cloud Standards Customer Council about how to help grow and develop a cloud-enabled workforce.

Fostering a cloud-enabled workforce also helps drive agility, efficiency and transformation, said Jeff Boleman, cloud lead and cognitive architect at IBM and a contributor to the guide. Boleman cited a CEB survey that forecasted a dramatic increase in the need for security and cloud hosting skills, making the development of a skilled, cloud-enabled workforce more pertinent than ever.

“It’s really about training, because, at the end of the day, you are owning what you do as a company; you need to be responsible for it and take charge of it,” he said during a recent webinar.

Developing a framework for cloud skills training: Six steps

Any strategy for ongoing cloud skills training should ideally align with the organization’s cloud transformation plans or an existing cloud strategy, said Lisa Schenkewitz, executive IT architect at IBM and a co-contributor to the guide. 

The first step to develop a cloud-enabled workforce requires understanding the existing culture. This necessitates an awareness of organizational values and the way people interact, as well as a basic understanding of what it is like to actually work in the organization, Schenkewitz explained.

“You can use that knowledge as a framework to understand other aspects of the company, such as how easy is it to change IT processes … and [get] a basic understanding of what it is to be on the cloud on the leadership level,” Schenkewitz said.

The next step is to understand the skills that are needed. Traditional IT skills might not be required in the new cloud environment, she said. Non-IT skills that might prove valuable include contract management, business process change and accounting experience, as well as domain knowledge for operations in the cloud, like DevSecOps, IT frameworks and IT governance processes, she said.

Adopt a consistent, intentional program to communicate, celebrate, sustain and embrace the habits of ongoing skills training.
Lisa Schenkewitz, executive IT architect at IBM

Step three is to understand the organization’s existing skills and where the gaps exist. When talking to different team members, consider questions such as what existing skills can be used, whether there is an opportunity or desire for retraining and what skills are missing entirely, she suggested.

Organizations can then move on to understand and identify what needs to be remediated in order to be successful. Once a complete examination of the skills and process gaps is done, a plan can be devised to remediate them, she said.

Remediation planning and execution comes next. Schenkewitz advised looking for ways to effectively engage current team members and develop ways to attract new talents and skills, like offering internships and apprenticeships.

The final step is to be ready to embrace change.

“It’s going to be an ongoing rollercoaster ride,” she said. “But the most important thing to remember is to adopt a consistent, intentional program to communicate, celebrate, sustain and embrace the habits of ongoing skills training.”

Best practices for cloud skills training

William Van Order, fellow emeritus, Lockheed Martin

Letting IT take a key role in crafting some of the cloud education programs is essential, according to William Van Order, computer systems architect and fellow emeritus at Lockheed Martin in Bethesda, Md.

Apart from taking advantage of the wide range of knowledge-sharing tools in existence, he encouraged companies to “leverage as much of what you find out of the box in terms of training content,” but customize that basic training to fit their needs, added Van Order, who also led the development of the guide.

Recognizing the accomplishment of those who are willing to take control of their career development is also paramount, he said.

“Make sure that you integrate learning and knowledge training objectives into the performance objectives of your key staff so that you can measure that and recognize that,” he said. “That really sends a positive message to the workforce.”

Hyper-converged infrastructure solutions: Scale, Nutanix tap channel

Scale Computing and Nutanix both made channel partner moves this week in the hyper-converged infrastructure solutions market.

Scale Computing launched a managed service provider (MSP) program with an updated pricing model. The new Opex subscription is based on price per node, per month. The company said the MSP pricing model aims to boost partner profitability and reduce Capex.

The MSP program also provides features that Scale Computing said enable partners to sell disaster recovery as a service, infrastructure services, and remote management as a service. The company said the new channel initiative follows increased demand for Scale Computing's HC3 platform in the hyper-converged infrastructure solutions space.

MSPs can purchase HC3 appliances based on Scale Computing and Lenovo hardware or as an “on-premises data center in a box,” according to the company.

Nutanix, meanwhile, has teamed up with Lenovo to launch the Velocity partner program for selling Nutanix Enterprise Cloud OS software in the midmarket space.

The program features incentives, marketing support, accelerated selling processes and product bundles based on Lenovo’s HX hyper-converged appliance, Nutanix said. Nutanix and Lenovo will also release a new hyper-converged product, Lenovo ThinkAgile HX Certified Nodes, targeting enterprise customers.

Other hyper-convergence vendors are targeting MSPs and resellers in their channel partnership strategy plans. Pivot3 earlier this year reported a more than 65% sales increase from the first half of 2017 to the second half of that year.

Hyper-converged infrastructure solutions, meanwhile, have become an important technology for MSPs and other channel partners. The benefits of hyper-converged offerings have expanded beyond initial use cases such as virtual desktop infrastructure to other applications such as virtual machine clustering and production-side deployments. In addition, channel partner executives identified hyper-converged infrastructure solutions as among the technologies setting the pace this year in the storage market.

Chart showing hyper-converged benefits, challenges and use cases
Channel partners are seeing increased demand as more CIOs adopt hyper-converged infrastructure solutions.

Commvault: Changes ahead for channel strategy

Data management vendor Commvault has established a three-year plan to revamp its channel program and strategy.   

The changes come after Commvault’s appointment of Scott Strubel, vice president of worldwide channels, in late April of this year. Strubel joined Commvault from NetApp, where he served as vice president of the Americas partner organization. According to Strubel, Commvault’s long-term channel refresh will focus on four pillars — predictability, consistency and profitability, combined with simplicity — with several updates rolling out in the coming months.

“We will soon be making announcements on what our partners are going to see in the new partner program,” Strubel said.

Commvault partners can expect to see significant changes to the pricing and packaging of products, he noted. Changes include plans to decrease the number of SKUs, as well as parts required to build solutions. Additionally, Commvault will introduce new sales and technical enablement resources. “We will make multiple system-based and people-based venues available to our partners to get better enabled on selling the newly simplified Commvault solutions and taking them to market,” he said.

Strubel also revealed plans for more investments in demand generation, bringing “more leads to our partners in the coming year than we have brought in any year prior.”

Lifesize retools partner program

Lifesize, a video collaboration vendor based in Austin, Texas, has revamped its channel partnership strategy in an effort to boost partner margins.

The Lifesize partner program now offers a two-tiered incentive structure. The first tier rewards distributors for finding net new resellers for Lifesize, while the second tier rewards distributors as their existing resellers move up in the Lifesize program, from Silver to Gold status, for example.

“It’s not enough to just find new partners. We need all partners to grow their Lifesize business,” said Tim Maloney, senior vice president of worldwide channels at Lifesize. The company currently has more than 1,500 partners.

Maloney said partners will play a central role in launching Lifesize Dash, a recently announced software-based collaboration offering for small meeting spaces. The product, priced at less than $1,000, will be available in the third quarter of 2018.

Other news

  • Ensono, a hybrid IT services provider, closed on its purchase of Wipro Ltd.’s hosted data center services business in the U.S., Europe and Singapore. Acquisition activity among cloud services and hosting providers has intensified in recent months.
  • Twilio, a cloud communications platform company, launched Twilio Build, a channel program that offers go-to-market support, training and certification, and a partner success team. Twilio said the program has two partner tiers — Registered and Gold — as well as a marketplace where partners can showcase their Twilio offerings.
  • SolarWinds MSP, an IT service management solution provider, unveiled MSP Pulse, a benchmarking tool for MSPs. The tool was developed in partnership with The 2112 Group.
  • JASK, a security operations center platform vendor, said it has raised $25 million in Series B funding. The company said the funding round, led by Kleiner Perkins, will let the company “expand global sales channels,” increase hiring and focus on platform development. JASK launched a channel partnership strategy and program earlier this year.
  • CloudJumper, a workspace-as-a-service platform vendor, is partnering with Synoptek, an MSP. Synoptek will private label CloudJumper’s cloud workspace platform and streaming app services.
  • Tech Data signed a distributor agreement with Omnicharge, a power-source vendor, to provide its multiport power bank and power station products. Omnicharge said its products come with a one-year limited warranty and lifetime customer support.
  • Atmosera, a Microsoft Cloud Solution Provider, appointed Ellie Soleymani as director of marketing and Mark Lipscomb as director of customer success.

Unchecked cloud IoT costs can quickly spiral upward

The convergence of IoT and cloud computing can tantalize enterprises that want to delve into new technology, but it’s potentially a very pricey proposition.

Public cloud providers have pushed heavily into IoT, positioning themselves as a hub for much of the storage and analysis of data collected by these connected devices. Managed services from AWS, Microsoft Azure and others make IoT easy to initiate, but users who don’t properly configure their workloads quickly encounter runaway IoT costs.

Cost overruns on public cloud deployments are nothing new, despite lingering perceptions that these platforms are always a cheaper alternative to private data centers. But IoT architectures are particularly sensitive to metered billing because of the sheer volume of data they produce. For example, a connected device in a factory setting could generate hundreds of unique streams of data every few milliseconds that record everything from temperatures to acoustics. That could add up to a terabyte of data uploaded daily to cloud storage.

“The amount of data you transmit and store and analyze is potentially infinite,” said Ezra Gottheil, an analyst at Technology Business Research Inc. in Hampton, N.H. “You can measure things however often you want. And if you measure it often, the amount of data grows without bounds.”

Users must also consider networking costs. Most large cloud vendors charge based on communications between the device and their core services. And in typical public cloud fashion, each vendor charges differently for those services.

Predictive analytics reveals, compares IoT costs

To parse the complexity and scale of potential IoT cost considerations, analyst firm 451 Research built a Python simulation and applied predictive analytics to determine costs for 10 million IoT workload configurations. It found Azure was largely the least-expensive option — particularly if resources were purchased in advance — though AWS could be cheaper on deployments with fewer than 20,000 connected devices. It also illuminated how vast pricing complexities hinder straightforward cost comparisons between providers.

In a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world.
Owen Rogers, analyst, 451 Research

For example, Google charges in terms of data transferred, while AWS and Azure charge against the number of messages sent. Yet, AWS and Azure treat messages differently, which can also affect IoT costs; Microsoft caps the size of a message, potentially requiring a customer to send multiple messages.

There are other unexpected charges, said Owen Rogers, a 451 analyst. Google, for example, charges for ping messages, which check that the connection is kept alive. That ping may only be 64 bytes, but Google rounds up to the kilobyte. So, customers essentially pay for unused capacity.

“Each of these models has nuances, and you only really discover them when you look through the terms and conditions,” Rogers said.

Some of these nuances aim to protect the provider or hide complexity from the users, but users may scratch their heads. Charging discrepancies are endemic to the public cloud, but IoT costs present new challenges for those deciding which cloud to use — especially those who start out with no past experience as a reference point.

“How are you going to say it’s less or more than it was before? At least in a VM, you have a comparison with dedicated servers from before. But with IoT, it’s a whole new world,” Rogers said. “If you want to compare providers, it would be almost impossible to do manually.”
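
The 451 model itself is not public, but a toy version of the same idea shows why manual comparison breaks down. In the sketch below, the per-message and per-megabyte prices, the message-size cap and the rounding rule are all illustrative placeholders rather than actual AWS, Azure or Google rates; even so, the cheaper billing model flips depending on message size and fleet size.

    # Toy IoT messaging-cost comparison; every rate below is an illustrative placeholder.
    def monthly_cost_per_message(devices, msgs_per_device_per_day, msg_bytes,
                                 price_per_million_msgs, max_msg_bytes=4096):
        # Providers that bill per message may split payloads above a size cap.
        billable_per_msg = -(-msg_bytes // max_msg_bytes)        # ceiling division
        msgs = devices * msgs_per_device_per_day * 30 * billable_per_msg
        return msgs / 1_000_000 * price_per_million_msgs

    def monthly_cost_per_megabyte(devices, msgs_per_device_per_day, msg_bytes,
                                  price_per_mb, min_billed_bytes=1024):
        # Providers that bill on traffic may round small messages up (e.g. a 64-byte ping to 1 KB).
        billed = max(msg_bytes, min_billed_bytes)
        mb = devices * msgs_per_device_per_day * 30 * billed / 1_000_000
        return mb * price_per_mb

    fleet = dict(devices=20_000, msgs_per_device_per_day=1_440, msg_bytes=512)
    print("per-message billing :", monthly_cost_per_message(**fleet, price_per_million_msgs=1.0))
    print("per-megabyte billing:", monthly_cost_per_megabyte(**fleet, price_per_mb=0.002))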

There are many unknowns to building an IoT deployment compared to more traditional applications, some of which apply regardless of whether it’s built on the public cloud or in a private data center. Software asset management can be a huge cost at scale. In the case of a connected factory or building, greater heterogeneity affects time and cost, too.

“Developers really need to understand the environment, and they have to be able to program for that environment,” said Alfonso Velosa, a Gartner analyst. “You would set different protocols, logic rules and processes when you’re in the factory for a robot versus a man[-operated] machine versus the air conditioners.”

Data can also get stale rather quickly and, in some cases, become useless, if it’s not used within seconds. Companies must put policies in place to make sure they understand how frequently to record data and transmit the appropriate amount of data back to the cloud. That includes when to move data from active storage to cold storage and if and when to completely purge those records.

“It’s really sitting down and figuring out, ‘What’s the value of this data, and how much do I want to collect?'” Velosa said. “For a lot of folks, it’s still not clear where that value is.”