Tag Archives: Cloud

VMware’s Bitnami acquisition grows its development portfolio

The rise of containers and the cloud has changed the face of the IT market, and VMware must evolve with it. The vendor has moved out of its traditional data center niche and — with its purchase of software packager Bitnami — has made a push into the development community, a change that presents new challenges and potential. 

Historically, VMware delivered a suite of system infrastructure management tools. With the advent of cloud and digital disruption, IT departments’ focus expanded from monitoring systems to developing applications. VMware has extended its management suite to accommodate this shift, and its acquisition of Bitnami adds new tools that ease application development.

Building applications presents difficulties for many organizations. Developers spend much of their time on application plumbing, writing software that performs mundane tasks — such as storage allocation — and linking one API to another.

Bitnami sought to simplify that work. The company created prepackaged components called installers that automate the development process. Rather than write the code themselves, developers can now download Bitnami system images and plug them into their programs. As VMware delves further into hybrid cloud market territory, Bitnami brings simplified app development to the table.

“Bitnami’s solutions were ahead of their time,” said Torsten Volk, managing research director at Enterprise Management Associates (EMA), an IT research and consulting firm based in Portsmouth, New Hampshire. “They enable developers to bulletproof application development infrastructure in a self-service manner.”

The value Bitnami adds to VMware

Released under the Apache License, Bitnami’s modules contain commonly coupled software applications instead of just bare-bones images. For example, a Bitnami WordPress stack might contain WordPress, a database management system (e.g., MySQL) and a web server (e.g., Apache).

Bitnami takes care of several mundane programming chores. It keeps all components up to date — so if it finds a security problem, it patches that problem — and updates those components’ associated libraries. Bitnami makes its modules available through its Application Catalogue, which functions like an app store.

The company designed its products to run on a wide variety of systems. Bitnami supports Apple OS X, Microsoft Windows and Linux OSes. Its VM features work with VMware ESX and ESXi, VirtualBox and QEMU. Bitnami stacks also are compatible with software infrastructures such as WAMP, MAMP, LAMP, Node.js, Tomcat and Ruby. It supports cloud tools from AWS, Azure, Google Cloud Platform and Oracle Cloud. The installers also cover a wide variety of applications, including AbanteCart, Magento, MediaWiki, PrestaShop, Redmine and WordPress.

Bitnami seeks to help companies build applications once and run them on many different configurations.

“For enterprise IT, we intend to solve for challenges related to taking a core set of application packages and making them available consistently across teams and clouds,” said Milin Desai, general manager of cloud services at VMware.

Development teams share project work among individuals, work with code from private or public repositories and deploy applications on private, hybrid and public clouds. As such, Bitnami’s flexibility made it appealing to developers — and VMware.

How Bitnami and VMware fit together

VMware wants to extend its reach from legacy, back-end data centers and appeal to more front-end and cloud developers.

“In the last few years, VMware has gone all in on trying to build out a portfolio of management solutions for application developers,” Volk said. VMware embraced Kubernetes and has acquired container startups such as Heptio to prove it.

Bitnami adds another piece to this puzzle, one that provides a curated marketplace for VMware customers who hope to emphasize rapid application development.

“Bitnami’s application packaging capabilities will help our customers to simplify the consumption of applications in hybrid cloud environments, from on-premises to VMware Cloud on AWS to VMware Cloud Provider Program partner clouds, once the deal closes,” Desai said.

Facing new challenges in a new market

However, the purchase moves VMware out of its traditional virtualized enterprise data center sweet spot. VMware has little name recognition among developers, so the company must build its brand.

“Buying companies like Bitnami and Heptio is an attempt by VMware to gain instant credibility among developers,” Volk said. “They did not pay a premium for the products, which were not generating a lot of revenue. Instead, they wanted the executives, who are all rock stars in the development community.”  

Supporting a new breed of customer poses its challenges. Although VMware’s Bitnami acquisition adds to its application development suite — an area of increasing importance — it also places new hurdles in front of the vendor. Merging the culture of a startup with that of an established supplier isn’t always a smooth process. In addition, VMware has bought several startups recently, so consolidating its variety of entities in a cohesive manner presents a major undertaking.


Dremio Data Lake Engine 4.0 accelerates query performance

Dremio is advancing its technology with a new release that supports AWS, Azure and hybrid cloud deployments, providing what the vendor refers to as a Data Lake Engine.

The Dremio Data Lake Engine 4.0 platform is rooted in multiple open source projects, including Apache Arrow, and offers the promise of accelerated query performance for data lake storage.

Dremio made the platform generally available on Sept. 17. The Dremio Data Lake Engine 4.0 update introduces a feature called column-aware predictive pipelining that helps predict access patterns, which makes queries faster. The new Columnar Cloud Cache (C3) feature in Dremio also boosts performance by caching data closer to where compute execution occurs.

For IDC analyst Stewart Bond, the big shift in the Dremio 4.0 update is how the data lake engine vendor has defined its offering as a “Data Lake Engine” focused on AWS and Azure.

In some ways, Dremio had previously struggled to define what its technology actually does, Bond said. In the past, Dremio had been considered a data preparation tool, a data virtualization tool and even a data integration tool, he said. It does all those things, but in ways, and with data, that differ markedly from traditional technologies in the data integration software market.

“Dremio offers a semantic layer, query and acceleration engine over top of object store data in AWS S3 or Azure, plus it can also integrate with more traditional relational database technologies,” Bond said. “This negates the need to move data out of object stores and into a data warehouse to do analytics and reporting.”

Simply having a data lake doesn’t do much for an organization. A data lake is just data, and just as with natural lakes, water needs to be extracted, refined and delivered for consumption, Bond said.

“For data in a data lake to be valuable, it typically needs to be extracted, refined and delivered to data warehouses, analytics, machine learning or operational applications where it can also be transformed into something different when blended with other data ingredients,” Bond said. “Dremio provides organizations with the opportunity to get value out of data in a data lake without having to move the data into another repository, and can offer the ability to blend it with data from other sources for new insights.”

How Dremio Data Lake Engine 4.0 works

Organizations use technologies like ETL (extract, transform, load), among other things, to move data from data lake storage into a data warehouse because they can’t query the data fast enough where it is, said Tomer Shiran, co-founder and CTO of Dremio. That performance challenge is one of the drivers behind the C3 feature in Dremio 4.

“With C3 what we’ve developed is a patent pending real-time distributed cache that takes advantage of the NVMe devices that are on the instances that we’re running on to automatically cache data from S3,” Shiran explained. “So when the query engine is accessing a piece of data for the second time, it’s at least 10 times faster than getting it directly from S3.”

[Screenshot: Dremio data lake architecture]

The new column-aware predictive pipelining feature in Dremio Data Lake Engine 4.0 further accelerates query performance for the initial access. The feature increases data read throughput to the maximum allowed on a given network, Shiran explained.

While Dremio is positioning its technology as a data lake engine that can be used to query data stored in a data lake, Shiran noted that the platform also has data virtualization capabilities. With data virtualization, pointers or links to data sources enable the creation of a logical data layer.
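To make that concrete, the rough sketch below submits a SQL query to a Dremio coordinator over its REST API, joining an S3-backed dataset with a virtualized relational source so the data never has to be copied into a warehouse first. The host name, credentials and dataset paths are hypothetical, and the endpoint paths follow Dremio's documented REST API, which may differ between versions.

```python
import time
import requests

DREMIO = "http://dremio.example.internal:9047"  # hypothetical coordinator address

# Authenticate and build the token header Dremio's REST API expects.
login = requests.post(f"{DREMIO}/apiv2/login",
                      json={"userName": "analyst", "password": "example-password"})
headers = {"Authorization": "_dremio" + login.json()["token"]}

# Query data where it sits -- an S3-backed dataset joined to a virtualized
# relational source -- instead of copying it into a warehouse first.
sql = """
SELECT c.region, SUM(o.amount) AS revenue
FROM   s3_lake.sales.orders o            -- Parquet files in S3 (hypothetical path)
JOIN   postgres_crm.public.customers c   -- virtualized relational source (hypothetical)
  ON   o.customer_id = c.id
GROUP BY c.region
"""
job = requests.post(f"{DREMIO}/api/v3/sql", json={"sql": sql}, headers=headers).json()

# Poll the job until it finishes, then fetch the first page of results.
while True:
    status = requests.get(f"{DREMIO}/api/v3/job/{job['id']}", headers=headers).json()
    if status["jobState"] in ("COMPLETED", "FAILED", "CANCELED"):
        break
    time.sleep(1)

rows = requests.get(f"{DREMIO}/api/v3/job/{job['id']}/results", headers=headers).json()
print(rows.get("rows", [])[:5])
```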

Apache Arrow

One of the foundational technologies that enables the Dremio Data Lake Engine is the open source Apache Arrow project, which Shiran helped to create.

“We took the internal memory format of Dremio, and we open sourced that as Apache Arrow, with the idea that we wanted our memory format to be an industry standard,” Shiran said.

Arrow has become increasingly popular over the past three years and is now used by many different tools, including Apache Spark.
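As a small illustration of what Arrow provides, the sketch below uses the pyarrow library to build a columnar table, write it to Parquet and convert it back to pandas; the column names and file name are made up for the example.

```python
# Minimal Apache Arrow sketch with pyarrow: an in-memory columnar table
# that can be written to Parquet and handed to pandas without row-by-row
# serialization. Column names and file name are illustrative only.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "order_id": pa.array([1001, 1002, 1003], type=pa.int64()),
    "amount":   pa.array([19.99, 5.25, 102.50], type=pa.float64()),
})

pq.write_table(table, "orders.parquet")       # columnar layout on disk
roundtrip = pq.read_table("orders.parquet")   # columnar layout in memory

# Tools that speak Arrow (Spark, Dremio, pandas via pyarrow) can exchange
# this representation directly, which is the point of a standard format.
print(roundtrip.to_pandas())
```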

With the growing use of Arrow, Dremio’s goal is to make communications between its platform and other tools that use Arrow as fast as possible. Among the ways that Dremio is helping to make Arrow faster is with the Gandiva effort that is now built into Dremio 4, according to the vendor. Gandiva is an execution kernel that is based on the LLVM compiler, enabling real-time code compilation to accelerate queries.

Dremio will continue to work on improving performance, Shiran said.

“At the end of the day, customers want to see more and more performance, and more data sources,” he said. “We’re also making it more self-service for users, so for us we’re always looking to reduce friction and the barriers.”


Analyzing data from space – the ultimate intelligent edge scenario

Space represents the next frontier for cloud computing, and Microsoft’s unique approach to partnerships with pioneering companies in the space industry means together we can build platforms and tools that foster significant leaps forward, helping us gain deeper insights from the data gleaned from space.

One of the primary challenges for this industry is the sheer amount of data available from satellites and the infrastructure required to bring this data to ground, analyze the data and then transport it to where it’s needed. With almost 3,000 new satellites forecast to launch by 2026 [1] and a threefold increase in the number of small satellite launches per year, the magnitude of this challenge is growing rapidly.

Essentially, this is the ultimate intelligent edge scenario – where massive amounts of data must be processed at the edge – whether that edge is in space or on the ground. Then the data can be directed to where it’s needed for further analytics or combined with other data sources to make connections that simply weren’t possible before.

DIU chooses Microsoft and Ball Aerospace for space analytics

To help with these challenges, the Defense Innovation Unit (DIU) just selected Microsoft and Ball Aerospace to build a solution demonstrating agile cloud processing capabilities in support of the U.S. Air Force’s Commercially Augmented Space Inter Networked Operations (CASINO) project.

With the aim of making satellite data more actionable more quickly, Ball Aerospace and Microsoft teamed up to answer the question: “what would it take to completely transform what a ground station looks like, and downlink that data directly to the cloud?”

The solution involves placing electronically steered flat panel antennas on the roof of a Microsoft datacenter. These phased array antennas don’t require much power and need only a couple of square meters of roof space. This innovation can connect multiple low earth orbit (LEO) satellites with a single antenna aperture, significantly accelerating the delivery rate of data from satellite to end user with data piped directly into Microsoft Azure from the rooftop array.

Analytics for a massive confluence of data

Azure provides the foundational engine for Ball Aerospace algorithms in this project, processing worldwide data streams from up to 20 satellites. With the data now in Azure, customers can direct that data to where it best serves the mission need, whether that’s moving it to Azure Government to meet compliance requirements such as ITAR or combining it with data from other sources, such as weather and radar maps, to gain more meaningful insights.

In working with Microsoft, Steve Smith, Vice President and General Manager, Systems Engineering Solutions at Ball Aerospace called this type of data processing system, which leverages Ball phased array technology and imagery exploitation algorithms in Azure, “flexible and scalable – designed to support additional satellites and processing capabilities. This type of data processing in the cloud provides actionable, relevant information quickly and more cost-effectively to the end user.”

With Azure, customers gain its advanced analytics capabilities such as Azure Machine Learning and Azure AI. This enables end users to build models and make predictions based on a confluence of data coming from multiple sources, including multiple concurrent satellite feeds. Customers can also harness Microsoft’s global fiber network to rapidly deliver the data to where it’s needed using services such as ExpressRoute and ExpressRoute Global Reach. In addition, ExpressRoute now enables customers to ingest satellite data from several new connectivity partners to address the challenges of operating in remote locations.

For tactical units in the field, this technology can be replicated to bring information to where it’s needed, even in disconnected scenarios. As an example, phased array antennas mounted to a mobile unit can pipe data directly into a tactical datacenter or Data Box Edge appliance, delivering unprecedented situational awareness in remote locations.

A similar approach can be used for commercial applications, including geological exploration and environmental monitoring in disconnected or intermittently connected scenarios. Ball Aerospace specializes in weather satellites, and now customers can more quickly get that data down and combine it with locally sourced data in Azure, whether for agricultural, ecological, or disaster response scenarios.

This partnership with Ball Aerospace enables us to bring satellite data to ground and cloud faster than ever, leapfrogging other solutions on the market. Our joint innovation in direct satellite-to-cloud communication and accelerated data processing provides the Department of Defense, including the Air Force, with entirely new capabilities to explore as they continue to advance their mission.

  1. https://www.satellitetoday.com/innovation/2017/10/12/satellite-launches-increase-threefold-next-decade/


Author: Microsoft News Center

Google Cloud tackles Spark on Kubernetes

An early version of a Google Cloud service that runs Apache Spark on Kubernetes is now available, but more work will be required to flesh out the container orchestration platform’s integrations with data analytics tools.

Kubernetes and containers haven’t been renowned for their use in data-intensive, stateful applications, including data analytics. But there are benefits to using Kubernetes as a resource orchestration layer under applications such as Apache Spark rather than the Hadoop YARN resource manager and job scheduling tool with which it’s typically associated. Developers and IT ops stand to gain the advantages that containers bring to any application, such as portability across systems and consistency in configuration. They also get automated provisioning and scaling for workloads, handled in the Kubernetes layer or by Helm charts, as well as better resource efficiency than virtual or bare-metal machines.

“Analytical workloads, in particular, benefit from the ability to add rapidly scalable cloud capacity for spiky peak workloads, whereas companies might want to run routine, predictable workloads in a virtual private cloud,” said Doug Henschen, an analyst at Constellation Research in Cupertino, Calif. 

Google, which offers managed versions of Apache Spark and Apache Hadoop that run on YARN through its Cloud Dataproc service, would prefer to use its own Kubernetes platform to orchestrate resources — and to that end, released an alpha preview integration for Spark on Kubernetes within Cloud Dataproc this week. Other companies, such as Databricks (run by the creators of Apache Spark) and D2iQ (formerly Mesosphere), support Spark on Kubernetes, but Google Cloud Dataproc stands to become the first of the major cloud providers to include it in a managed service.

Apache Spark has had a native Kubernetes scheduler since version 2.3, and Hadoop added native container support in Hadoop 3.0.3, both released in May 2018. However, Hadoop’s container support is still tied to HDFS and is too complex, in Google’s view.
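As a point of reference, the sketch below shows roughly what targeting that native scheduler looks like from PySpark; the API server URL, container image, namespace and service account are placeholders, and in practice jobs are more commonly packaged and launched with spark-submit in cluster mode.

```python
from pyspark.sql import SparkSession

# Hedged sketch of targeting Spark's native Kubernetes scheduler (Spark 2.3+)
# from PySpark. The API server URL, image, namespace and service account are
# placeholders; the spark.kubernetes.* keys come from Spark's documented
# Kubernetes support.
spark = (
    SparkSession.builder
    .appName("spark-on-k8s-demo")
    .master("k8s://https://kube-apiserver.example.internal:6443")
    .config("spark.kubernetes.container.image", "registry.example.com/spark-py:2.4.4")
    .config("spark.kubernetes.namespace", "analytics")
    .config("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

# A trivial demo job; real workloads would read from object storage instead.
df = spark.range(1_000_000)
print(df.selectExpr("sum(id) AS total").collect())
spark.stop()
```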

“People have gotten Docker containers running on Hadoop clusters using YARN, but Hadoop 3’s container support is probably about four years too late,” said James Malone, product manager for Cloud Dataproc at Google. “It also doesn’t really solve the problems customers are trying to solve, from our perspective — customers don’t care about managing [Apache data warehouse and analytics apps] Hive or Pig, and want to use Kubernetes in hybrid clouds.”

Spark on Kubernetes only scratches the surface of big data integration

Cloud Dataproc’s Spark on Kubernetes implementation remains in a very early stage, and will require updates upstream to Spark as well as Kubernetes before it’s production-ready. Google also has its sights set on support for more Apache data analytics apps, including the Flink data stream processing framework, Druid low-latency data query system and Presto distributed SQL query engine.

“It’s still in alpha, and that’s by virtue of the fact that the work that we’ve done here has been split into multiple streams,” Malone said. One of those workstreams is to update Cloud Dataproc to run Kubernetes clusters. Another is to contribute to the upstream Spark Kubernetes operator, which remains in the experimental stage within Spark Core. Finally, Cloud Dataproc must brush up performance enhancement add-ons such as external shuffle service support, which aids in the dynamic allocation of resources.

For now, IT pros who want to run Spark on Kubernetes must assemble their own integrations among the upstream Spark Kubernetes scheduler, supported Spark from Databricks, and Kubernetes cloud services. Customers that seek hybrid cloud portability for Spark workloads must also implement a distributed storage system from vendors such as Robin Systems or Portworx. All of it can work, but without many of the niceties of fully integrated cloud platform services that would make life easier.

For example, using Python rather than the Scala programming language with the Spark Kubernetes scheduler is a bit trickier.

“The Python experience of Spark in Kubernetes has always lagged the Scala experience, mostly because deploying a compiled artifact in Scala is just easier logistically than pulling in dependencies for Python jobs,” said Michael Bishop, co-founder and board member at Alpha Vertex, a New York-based fintech startup that uses machine learning deployed in a multi-cloud Kubernetes infrastructure to track market trends for financial services customers. “This is getting better and better, though.”

There also remain fundamental differences between Spark’s job scheduler and Kubernetes that must be smoothed out, Bishop said.

“There is definitely an impedance [between the two schedulers],” he said. “Spark is intimately aware of ‘where’ is for [nodes], while Kubernetes doesn’t really care beyond knowing a pod needs a particular volume mounted.”

Google will work on sanding down these rough edges, Malone pledged.

“For example, we have an external shuffle service, and we’re working hard to make it work with both YARN and Kubernetes Spark,” he said.


Microsoft and The Walt Disney Studios to develop ‘scene-to-screen’ content workflows

Companies collaborate to pilot new ways to transform content workflows in the Microsoft Azure cloud; Microsoft becomes a Disney Studios StudioLAB innovation partner

REDMOND, Wash., and BURBANK, Calif. – Sept. 13, 2019 – Microsoft Corp. and The Walt Disney Studios today announced a five-year innovation partnership to pilot new ways to create, produce and distribute content on the Microsoft Azure cloud platform. Through The Walt Disney Studios’ StudioLAB, a technology hub designed to create and advance the future of storytelling with cutting-edge tools and methods, the companies will deliver cloud-based solutions to help accelerate innovation at The Walt Disney Studios for production and postproduction processes, or from “scene to screen.”

“The cloud has reached a tipping point for the media industry, and it’s not surprising that The Walt Disney Studios, which has its heritage based on a passion for innovation and technology, is at the forefront of this transformation,” said Kate Johnson, president of Microsoft US. “The combination of Azure’s hyperscale capacity, global distribution, and industry-leading storage and networking capabilities with Disney’s strong history of industry leadership unlocks new opportunity in the media and entertainment space and will power new ways to drive content and creativity at scale. With Azure as the platform cloud for content, we’re excited to work with the team at StudioLAB to continue to drive innovation across Disney’s broad portfolio of studios.”

“By moving many of our production and postproduction workflows to the cloud, we’re optimistic that we can create content more quickly and efficiently around the world,” said Jamie Voris, CTO, The Walt Disney Studios. “Through this innovation partnership with Microsoft, we’re able to streamline many of our processes so our talented filmmakers can focus on what they do best.”

Microsoft and Disney — working closely with leading global media technology provider Avid — are already demonstrating that the kinds of demanding, high-performance workflows the media and entertainment industry requires can be deployed and operated with the security offered by the cloud, while unlocking substantial new benefits and efficiencies and enabling production teams to rethink the way they get their work done.

Building on Microsoft’s strategic cloud alliance with Avid, the companies have already produced several essential media workflows running in the cloud today, including collaborative editing, content archiving, active backup and production continuity. Bringing these complex workflows into production using Avid solutions such as the Avid MediaCentral® platform, MediaCentral | Cloud UX™, Avid NEXIS® | Cloud storage and Avid Media Composer® — all running natively on Azure — will provide the foundation for helping transform content creation and content management to overcome today’s operational pressures, as well as pave the way for ongoing innovation.

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

The Walt Disney Studios Communications, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Author: Microsoft News Center

New details emerge about AWS Outposts as launch nears

AWS has provided more details about Outposts, the on-premises version of its IaaS cloud, in advance of its expected release late this year.

While AWS made Outposts a centerpiece of its re:Invent conference in December, and a fair amount of public information has been available, the company has now revealed which aspects and services of its public IaaS will ship in version 1 of Outposts.

AWS Outposts is aimed at customers who want the experience of running workloads on AWS inside their own data centers, for reasons such as latency and regulatory requirements.

It consists of server racks loaded with AWS software and is a fully managed offering, installed, operated and updated by AWS staff. Outpost machines will be continuously connected to a local AWS public cloud region.

Since re:Invent, AWS has worked with customers to figure out what types of services should be delivered in the first version of AWS Outposts. They will include several EC2 instance types — C5, M5, i3en and G4 — as well as Elastic Block Storage, AWS said in a blog post.

The general availability release of AWS Outposts will also support Amazon Elastic Container Service and Elastic Kubernetes Service, Elastic MapReduce and Amazon Relational Database Service, according to the blog. Subsequent additions will include the Amazon SageMaker machine learning platform, AWS said.

On paper, AWS Outposts are supposed to tie into any AWS public cloud service without issues. AWS also plans to port new public cloud capabilities to Outposts on a continuous basis, according to the blog.
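In practice, that is supposed to mean the same API calls developers already use against a region also work against an Outpost. The hedged boto3 sketch below launches an EC2 instance into a subnet that would be associated with an Outpost; every identifier is a placeholder, and the call itself is the standard RunInstances operation rather than anything Outposts-specific.

```python
import boto3

# Hedged sketch: launching an EC2 instance on an Outpost via the standard
# RunInstances API. Every identifier below is a placeholder; the only
# Outposts-specific detail is that the subnet belongs to the Outpost.
ec2 = boto3.client("ec2", region_name="us-west-2")  # Outposts anchor to a parent region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5.large",                # one of the instance families cited above
    SubnetId="subnet-0abc1234def56789a",    # placeholder subnet created on the Outpost
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```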

Initial prospects for Outposts include companies in manufacturing, healthcare, telecom and financial services. A common use case concerns applications that require latency in the single-digit millisecond range, AWS said.

Outposts follows cloud industry trend

AWS’ upcoming launch of Outposts ties into a trend where, for once, it is a laggard and not a pace-setter. Microsoft has already offered Azure Stack, Oracle has Exadata Cloud at Customer, IBM pushes Cloud Private, and Google moved into hybrid and on-premises scenarios with Anthos.

The crucial element of Outposts is the system’s close similarity to AWS’ public cloud infrastructure, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. IT decision-makers who want to develop on top of an Outpost shouldn’t have to look too closely at the fine print, which would slow them down, Mueller added.

Outposts should appeal to certain customers, said Ryan Marsh, a serverless expert and DevOps coach with TheStack.io in Houston.

“The idea of doing serverless in an AWS Outpost is very enticing to me,” Marsh said. “I have some clients [from whom] you cannot pry the data center from their cold dead hands. There are, as [AWS mentions], some clients with obvious low-latency needs.”

Also, AWS has built and managed so many data centers that it’s likely had the ability to ship something like Outposts for a while, Marsh added. “It just needed to be productized, but not so soon that it would have cannibalized the cloud biz,” he said.


Navy sails SAP ERP systems to AWS GovCloud

The U.S. Navy has moved several SAP and other ERP systems from on premises to AWS GovCloud, a public cloud service designed to meet the regulatory and compliance requirements of U.S. government agencies.

The project entailed migrating 26 ERPs across 15 landscapes that were set up for around 60,000 users across the globe. The Navy tapped SAP National Security Services Inc. (NS2) for the migration. NS2 was spun out of SAP specifically to sell SAP systems that adhere to the highly regulated conditions that U.S. government agencies operate under.

Approximately half of the systems that moved to AWS GovCloud were SAP ERP systems running on Oracle databases, according to Harish Luthra, president of NS2’s secure cloud business. SAP systems were also migrated to the SAP HANA database, while non-SAP systems remain on their respective databases.

Architecture simplification and reducing TCO

The Navy wanted to move the ERP systems to take advantage of the new technologies that are more suited for cloud deployments, as well as to simplify the underlying ERP architecture and to reduce the total cost of ownership (TCO), Luthra said.

The migration enabled the Navy to reduce its data footprint from 80 TB to 28 TB.

“Part of it was done through archiving, part was disk compression, so the cost of even the data itself is reducing quite a bit,” Luthra said. “On the AWS GovCloud side, we’re using one of the largest instances — 12 terabytes — and will be moving to a 24 terabyte instance working with AWS.”

The Navy also added applications to consolidate financial systems and improve data management and analytics functionality.

“We added one application called the Universe of Transactions, based on SAP Analytics that allows the Navy to provide a consolidated financial statement between Navy ERP and their other ERPs,” Luthra said. “This is all new and didn’t exist before on-premises and was only possible to add because we now have HANA, which enables a very fast processing of analytics. It’s a giant amount of transactions that we are able to crunch and produce a consolidated ledger.”

Accelerated timeline

The project was done at an accelerated pace that had to be sped up even more when the Navy altered its requirements, according to Joe Gioffre, SAP NS2 project principal consultant. The original go-live date was scheduled for May 2020, almost two years to the day when the project began. However, when the Navy tried to move a command working capital fund onto the on-premises ERP system, it discovered the system could not handle the additional data volume and workload.

This drove the HANA cloud migration go-live date to August 2019 to meet the fiscal new year start of Oct. 1, 2019, so the fund could be included.

“We went into a re-planning effort, drew up a new milestone plan, set up Navy staffing and NS2 staffing to the new plan so that we could hit all of the dates one by one and get to August 2019,” Gioffre said. “That was a colossal effort in re-planning and re-resourcing for both us and the Navy, and then tracking it to make sure we stayed on target with each date in that plan.”

Governance keeps project on track

Tight governance over the project was the key to completing it in the accelerated timeframe.

“We had a very detailed project plan with a lot of moving parts and we tracked everything in that project plan. If something started to fall behind, we identified it early and created a mitigation for it,” Gioffre explained. “If you have a plan that tracks to this level of detail and you fall behind, unless you have the right level of governance, you can’t execute mitigation quickly enough.”

The consolidation of the various ERPs onto one SAP HANA system was a main goal of the initiative, and it now sets up the Navy to take advantage of next-generation technology.

“The next step is planning a move to SAP S/4HANA and gaining process improvements as we go to that system,” he said.

Proving confidence in the public cloud

It’s not a particular revelation that public cloud hyperscalers like AWS GovCloud can handle huge government workloads, but it is notable that the Department of Defense is confident in going to the cloud, according to analyst Joshua Greenbaum, principal at Enterprise Applications Consulting, a firm based in Berkeley, Calif.

“The glitches that happened with Amazon recently and [the breach of customer data from Capital One] highlight the fact that we have a long way to go across the board in perfecting the cloud model,” Greenbaum said. “But I think that SAP and its competitors have really proven that stuff does work on AWS, Azure and, to a lesser extent, Google Cloud Platform. They have really settled in as legitimate strategic platforms and are now just getting the bugs out of the system.”

Greenbaum is skeptical that the project was “easy,” but it would be quite an accomplishment if it was done relatively painlessly.

“Every time you tell me it was easy and simple and painless, I think that you’re not telling me the whole story because it’s always going to be hard,” he said. “And these are government systems, so they’re not trivial and simple stuff. But this may show us that if the will is there and the technology is there, you can do it. It’s not as hard as landing on the moon, but you’re still entering orbital space when you are going to these cloud implementations, so it’s always going to be hard.”


Oracle OpenWorld 2019 coverage: Oracle seeks loftier cloud perch

Editor’s note

Oracle OpenWorld 2019 finds Oracle continuing to try to assert itself as a cloud leader but looking up at rival vendors that got the jump on it in the cloud.

That even applies to the company’s flagship database software, according to Gartner. It still ranked Oracle as the No. 1 database vendor overall in 2018 — but in the fast-growing cloud database segment, Oracle was fifth behind AWS, Microsoft, Google and China-based Alibaba. “They have a lot of catching up to do,” Gartner analyst Merv Adrian said in a session at the 2019 Pacific Northwest BI & Analytics Summit.

Oracle will make its cloud case at OpenWorld 2019. Follow our coverage of the conference and related Oracle developments here.


How to Configure a Quorum Cloud Witness for Failover Clustering

Windows Server Failover Clusters are becoming commonplace throughout the industry as the high-availability solution for virtual machines (VMs) and other enterprise applications. I’ve been writing about clustering since 2007 when I joined the engineering team at Microsoft (here is one of the most referenced online articles about quorum from 2011). Even today, one of the concepts that many users continue to misunderstand is quorum. Most admins know that it has something to do with keeping a majority of servers running, but this blog post will give more insight into why it is important to understand how it works. We will focus on the newest type of quorum configuration known as a cloud witness, which was introduced in Windows Server 2016. This solution is designed to support both on-premises clusters and multi-site clusters, along with guest clusters, which can run entirely in the Microsoft Azure public cloud.

Failover Clustering Quorum Fundamentals

NOTE: This post covers quorum for Windows Server 2016 and 2019. You can also find info related to quorum on older versions of Windows Server.

Outside of IT, the term “quorum” is defined in business practices as “the number of members of a group or organization required to be present to transact business legally, usually a majority” (Source: Dictionary.com). For Windows Server Failover Clustering, it means that there must be a majority of “cluster voters” online and in communication with each other for the cluster to operate. A cluster voter is either a cluster node or a disk which contains a copy of the cluster database.

The cluster database is a file which defines registry settings that identify the state of every element within the cluster, including all nodes, storage, networks, virtual machines (VMs) and applications. It also keeps track of which node should be the sole owner running each application and which node can write to each disk within the cluster’s shared storage. This is so important because it prevents a “split-brain” scenario which can cause corruption in a cluster’s database. A split-brain happens when there is a network partition between two sets of cluster nodes, and they both try to run the same application and write to the same disk in an uncoordinated fashion, which can lead to disk corruption. By designating one of these sets of cluster nodes as the authoritative servers, and forcing the secondary set to remain passive, it ensures that exactly one node runs each application and writes to each disk. The determination of which partition of cluster nodes stays online is based on which side of the partition has a majority of cluster voters, or which side has a quorum.

For this reason, you should always have an odd number of votes across your cluster, so that a clear majority (more than half) of the voters can be established. Here is a breakdown of the behavior based on the number of voting nodes or disks:

  • 2 Votes: This configuration is never recommended because both voters must be active for the cluster to stay online. If you lose communication between the voters, the cluster stays passive and will not run any workloads until both voters (a majority) are operational and in communication with each other.
  • 3 Votes: This works fine because one voter can be lost, and the cluster will remain operational, provided that two of the three voters are healthy.
  • 4 Votes: This can only sustain the loss of one voter, and three voters must be active. This is supported, but it requires extra hardware yet provides no additional availability benefit over a three-vote cluster.
  • 5, 7, 9 … 65 Voters: An odd number of voters is recommended to maximize availability by allowing you to lose half (rounded down) of your voters. For example, in a nine-node cluster, you can lose four voters and it will continue to operate as long as five voters are active.
  • 6, 8, 10 … 64 Voters: This is supported, yet you can only lose half minus one voter, so you are not maximizing your availability. In a ten-node cluster you can only lose four voters, so six must remain in communication with each other. This provides the same level of availability as the previous example with nine voters, yet requires an additional server. (A short calculation sketch follows this list.)
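The pattern in the list above reduces to a one-line calculation: with N voters, the cluster keeps quorum as long as a strict majority remains online, so it can tolerate losing floor((N - 1) / 2) voters. A small illustrative Python sketch:

```python
# Sketch of the voter-loss arithmetic described above: with N voters, a
# cluster keeps quorum as long as a strict majority remains, so it can
# tolerate losing floor((N - 1) / 2) voters.
def tolerable_voter_loss(total_voters: int) -> int:
    return (total_voters - 1) // 2

for n in (2, 3, 4, 5, 9, 10):
    print(f"{n} voters -> can lose {tolerable_voter_loss(n)}, "
          f"needs {n - tolerable_voter_loss(n)} online")
```

Two voters tolerate zero failures, which is exactly why the common two-node cluster needs a third vote from a witness, as described next.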

Using a Disk Witness for a Quorum Vote

Based on Microsoft’s telemetry data, a majority of failover clusters around the world are deployed with two nodes, to minimize the hardware costs. Although these two nodes only provide two votes, a third vote is provided by a shared disk, known as a “disk witness”. This disk can be any dedicated drive on a shared storage configuration that is supported by the cluster and passes the Validate a Cluster tests. This disk will also contain a copy of the cluster’s database, and just like every other clustered disk, exactly one node will own access to it. It does so by creating an open file handle on that ClusDB file. In the event of a network partition between the two servers, the partition that owns the disk witness will get the extra vote and run all workloads (since it has two of three votes for quorum), while the partition with a single vote will not run anything until it can communicate with the other nodes. This configuration has been supported for several releases; however, there is still a hardware cost to providing a shared storage infrastructure, which is why a cloud witness was introduced in Windows Server 2016.

Cloud Witness for a Failover Cluster

A cloud witness is designed to provide a vote to a Failover Cluster without requiring any physical hardware. It is basically a disk running in Microsoft Azure which contains a copy of the ClusDB and is accessible by all cluster nodes. It uses Microsoft Azure Blob Storage, and a single Azure Storage Account can be used for multiple clusters, although each cluster requires its own blob file. The cluster database file itself is very small, which means that the cost to operate this cloud-based storage is almost negligible. The configuration is fairly easy and well documented by Microsoft in its guide to Deploy a Cloud Witness for a Failover Cluster.

You will notice that the cloud witness is fully integrated within Failover Cluster Manager’s Configure Cluster Quorum Wizard, where you can select the option to Configure a cloud witness.

Selecting a Cloud Witness to use in the Configure Cluster Quorum Wizard

Next, you enter the Azure storage account name, key, and service endpoint.

Entering Cloud Witness details in Configure Cluster Quorum Wizard

Now you have added an extra vote to your failover cluster with much less effort and cost than creating and managing on-premises shared storage.
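If you want to confirm what the wizard created outside of Failover Cluster Manager, the witness data ends up as a small blob in the storage account you supplied. The hedged sketch below lists it with the azure-storage-blob Python SDK; the account name and key are placeholders, and the container name shown (msft-cloud-witness) is the one clusters commonly create, so verify it in your own subscription.

```python
from azure.storage.blob import BlobServiceClient

# Hedged sketch: inspecting the cloud witness blob with the azure-storage-blob
# SDK. The account name and key are placeholders; the container the cluster
# creates is commonly named "msft-cloud-witness", but verify in your account.
account_name = "clusterwitnessstorage"          # placeholder storage account
account_key = "<storage-account-access-key>"    # placeholder access key

service = BlobServiceClient(
    account_url=f"https://{account_name}.blob.core.windows.net",
    credential=account_key,
)

container = service.get_container_client("msft-cloud-witness")
for blob in container.list_blobs():
    # Typically one small blob per cluster, named after the cluster's unique ID.
    print(blob.name, blob.size)
```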

Failover Clustering Cloud Witness Scenarios

To conclude this blog post we’ll summarize the ideal scenarios for using the Cloud Witness:

  • On-premises clusters with no shared storage – For any even-node cluster with no extra shared storage, consider using a cloud witness as an odd vote to help you determine quorum. This configuration also works well with SQL Always On clusters and Scale-Out File Server clusters, which may have no shared storage.
  • Multi-site clusters – If you have a multi-site cluster for disaster recovery, you will usually have two or more nodes at each site. If these balanced sites lose connectivity with each other, you still need a cluster voter to determine which side has quorum. By placing this arbitrating vote in a third site (a cloud witness in Microsoft Azure), it can serve as a tie-breaker to determine the authoritative cluster site.
  • Azure Guest Clusters – Now that you can deploy a failover cluster entirely within Microsoft Azure using nested virtualization (also known as a “guest cluster”), you can utilize the cloud witness as an additional cluster vote. This provides you with an end-to-end high-availability solution in the cloud.

The cloud witness is a great solution provided by Microsoft to increase availability in Failover Clusters while reducing the cost to customers. It is now easy to operate a two-node cluster without having to pay for a third host or shared storage disk, whose only role is to provide a vote. Consider using the cloud witness for your cluster deployments and look for Microsoft to continue to integrate its on-premises Windows Server solutions with Microsoft Azure as the industry’s leading hybrid cloud provider.

Author: Symon Perriman

Supporting modern technology policy for the financial services industry – guidelines by the European Banking Authority

The financial services community has unprecedented opportunity ahead. With new technologies like cloud, AI and blockchain, firms are creating new customer experiences, managing risk more effectively, combating financial crime, and meeting critical operational objectives. Banks, insurers and other services providers are choosing digital innovation to address these opportunities at a time when competition is increasing from every angle – from traditional and non-traditional players alike.

At the same time, our experience is that lack of clarity in regulation can hinder adoption of these exciting technologies, as regulatory compliance remains fundamental to financial institutions using technology they trust.  Indeed, the common question I get from customers is: Will regulators let me use your technology, and have you built in the capabilities to help me meet my compliance obligations?

[Photo: Dave Dadoun, assistant general counsel for Microsoft]

With this in mind, we applaud the European Banking Authority’s (EBA) revised Guidelines on outsourcing arrangements which, in part, address the use of cloud computing. For several years now we have shared perspectives with regulators on how regulation can be modernized to address cloud computing without diminishing the security, privacy, transparency and compliance safeguards necessary in a native cloud or hybrid-cloud world. In fact, cloud computing can afford financial institutions greater risk assurance – particularly on key things like managing data, securing data, addressing cyber threats and maintaining resilience.

At the core of the revised guidelines are a set of flexible principles addressing cloud in financial services. Indeed, the EBA has been clear these “guidelines are subject to the principle of proportionality,” and should be “applied in a manner that is appropriate, taking into account, in particular, the institution’s or payment institution’s size … and the nature, scope and complexity of its activities.” In addition, the guidelines set out to harmonize approaches across jurisdictions, a big step forward for financial institutions to have predictability and consistency among regulators in Europe. We think the EBA took this smart move to support leading-edge innovation and responsible adoption, and prepare for more advanced technology like machine learning and AI going forward.

Given these guidelines reflect a modernized approach that transcends Europe, we have updated our global Financial Services Amendment for customers to reflect these key changes. We have also created a regulatory mapping document which shows how our cloud services and underlying contractual commitments map to these requirements in an EU Checklist. The EU Checklist is accessible on the Microsoft Service Trust Portal. In essence, Europe offers the benchmark in establishing rules to permit use of cloud for financial services and we are proud to align to such requirements.

Because this is such an important milestone for the financial sector, we wanted to share our point-of-view on a few key aspects of the guidelines, which may help firms accelerate technology transformation with the Microsoft cloud going forward:

  • Auditability: As cloud has become more prevalent, we think it is natural to extend audit rights to cloud vendors in circumstances that warrant it. We also think that audits are not a one-size-fits-all approach but adaptable based on use cases – particularly whether it involves running core banking systems in the cloud. Microsoft has provided innovations to help supervise and audit hyper-scale cloud, including:
  • Data localization: We are pleased there are no data localization requirements in the EBA guidance. Rather, customers must assess the legal, security and other risks where data is stored, as opposed to mandating data be stored strictly in Europe. We help customers manage and assess such risk by providing:
    • Contractual commitments to store data at rest in a specified region (including Europe).
    • Transparency where data is stored.
    • Full commitments to meet key privacy requirements, like the General Data Protection Regulation (GDPR).
    • Flow-through of such commitments to our subcontractors.
  • Subcontractors. The guidelines address subcontractors, particularly those that provide “critical or important” functions. Management, governance and oversight of Microsoft’s subcontractors is core to what we do.  Among other things:
    • Microsoft’s subcontractors are subject to a vetting process and must follow the same privacy and governance controls we ourselves implement to protect customer data.
    • We provide transparency about subcontractors who may have access to customer data and provide 180 days notification about any new subcontractors as well.
    • We provide customers termination rights should they conclude a subcontractor presents a material increase in risk to a critical or important function of their operations.
  • Core platforms: We welcome the EBA’s position providing clarity that core platforms may run in the cloud. What matters is governance, documenting protocols, the security and resiliency of such systems, and having appropriate oversight (and audit rights), and commitments to terminate an agreement, if and when that becomes necessary. These are all capabilities Microsoft offers to its customers and we now see movement among leading banks to put core systems into our cloud because of the benefits we provide.
  • Business Continuity and Exit Planning. Institutions must have business continuity plans and test them periodically for use of critical or important functions. Microsoft has supported our customers to meet this requirement, including providing a Modern Cloud Risk Assessment toolkit and, in addition, in the Service Trust Portal documentation on our service resilience architecture, our Enterprise Business Continuity Management team (EBCM), and a quarterly report detailing results from our recent EBCM testing. In addition, we have supported our customers in preparing exit planning documentation, and we work with industry bodies like the European Banking Federation towards further industry guidance for these new EBA requirements.
  • Concentration risk: The EBA addresses the need to assess whether concentration risk may exist due to potential systemic failures in use of cloud services (and other legacy infrastructure). However, this is balanced with understanding what the risks are of a single point of failure, and to balance those risks and trade-offs from existing legacy systems. In short, financial institutions should assess the resiliency and safeguards provided with our hyper-scale cloud services, which can offer a more robust approach than systems in place today. When making those assessments, financial institutions may decide to lean-in more with cloud as they transform their businesses going forward.

The EBA framework is a great step forward to help modernize regulation and take advantage of cloud computing. We look forward to participating in ongoing industry discussion, such as new guidance under consideration by the European Insurance and Occupational Pension Authority concerning use of cloud services, as well as assisting other regions and countries in their journey to creating more modern policy that both supports innovation while protecting the integrity of critical global infrastructure.

For more information on Microsoft in the financial services industry, please go here.

Top photo courtesy of the European Banking Authority.

Author: Microsoft News Center