Tag Archives: Microsoft’s

How to Use ASR and Hyper-V Replica with Failover Clusters

In this third and final post of the blog series, we will evaluate Microsoft’s replication solutions for multi-site clusters and how to integrate basic backup/DR with them: Hyper-V Replica, Azure Site Recovery, and DFS Replication. In the first part of the series, you learned how to set up failover clusters to work with DR solutions, and in the second post, you learned about disk replication considerations from third-party storage vendors. The challenge with the solutions discussed so far is that they typically require third-party hardware or software. Let’s look at the technologies Microsoft provides natively to reduce these upfront fixed costs.

Note: The features discussed in this article are native Microsoft features with a baseline level of functionality. Should you require capabilities over and above what is provided here, you should look at a third-party backup/replication product such as Altaro VM Backup.

Multi-Site Disaster Recovery with Windows Server DFS Replication (DFSR)

DFS Replication (DFSR) is a Windows Server role service that has been around for many releases. Although DFSR is built into Windows Server and is easy to configure, it is not supported for multi-site clustering. This is because the replication of files only happens when a file is closed, so it works great for file servers hosting documents. However, it is not designed to work with application workloads where the file is kept open, such as SQL databases or Hyper-V VMs. Since these file types will only close during a planned failover or unplanned crash, it is hard to keep the data consistent at both sites. This means that if your first site crashes, the data will not be available at the second site, so DFSR should not be considered as a possible solution.

Multi-Site Disaster Recovery with Hyper-V Replica

The most popular Microsoft DR solution is Hyper-V Replica, a built-in Hyper-V feature available to Windows Server customers at no additional cost. It copies the virtual hard disk (VHD) file of a running virtual machine from one host to a second host in a different location. This is an excellent low-cost way to replicate your data between your primary and secondary sites, and it even allows you to do extended (“chained”) replication to a third location. However, it is limited in that it only replicates Hyper-V virtual machines (VMs), so it cannot be used for any other application unless that application is virtualized and running inside a VM. The way it works is that any changes to the VHD file are tracked in a log file, which is copied to an offline VM/VHD at the secondary site. Replication is asynchronous, with changes sent every 30 seconds, 5 minutes or 15 minutes. While this means there is no distance limitation between the sites, there could be some data loss if in-memory data has not been written to disk or if a crash occurs between replication cycles.
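As an illustration, the settings above map directly onto the Hyper-V PowerShell cmdlets. The sketch below assumes a hypothetical primary VM named SQL-VM01 and a replica host hv-dr01.contoso.local; adjust the names, port and frequency for your environment.

    # On the replica host: allow it to receive replication traffic
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation "D:\Replicas"

    # On the primary host: enable replication for the VM, sending changes every 5 minutes
    Enable-VMReplication -VMName "SQL-VM01" `
        -ReplicaServerName "hv-dr01.contoso.local" `
        -ReplicaServerPort 80 `
        -AuthenticationType Kerberos `
        -ReplicationFrequencySec 300

    # Kick off the initial copy over the network
    Start-VMInitialReplication -VMName "SQL-VM01"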

Figure 1 – Two Clusters Replicate Data between Sites with Hyper-V Replica

Hyper-V Replica allows for replication between standalone Hyper-V hosts, between separate clusters, or any combination of the two. This means that instead of stretching a single cluster across two sites, you set up two independent clusters. It also allows for a more affordable solution by letting businesses deploy a cluster in their primary site and a single host in their secondary site that is used only for mission-critical applications. If Hyper-V Replica is deployed on a failover cluster, a new clustered workload type is created, known as the Hyper-V Replica Broker. This makes the replication service highly available, so that if a node crashes, the replication engine fails over to a different node and continues to copy logs to the secondary site, providing greater resiliency.
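On a cluster, the broker is configured as a cluster role. A minimal sketch using the FailoverClusters cmdlets is shown below; the role name, IP address and resource names are placeholders, so verify the exact resource type name in your environment.

    # Create a role for the broker with its own network name and IP address
    Add-ClusterServerRole -Name "HVR-Broker" -StaticAddress 10.0.1.50

    # Add the replication broker resource to that role and tie it to the network name
    Add-ClusterResource -Name "VM Replication Broker" `
        -Type "Virtual Machine Replication Broker" -Group "HVR-Broker"
    Add-ClusterResourceDependency "VM Replication Broker" "HVR-Broker"

    # Bring the role online
    Start-ClusterGroup "HVR-Broker"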

Another powerful feature of Hyper-V Replica is its built-in testing, allowing you to simulate both planned and unplanned failovers to the secondary site. While this solution will meet the needs of most virtualized datacenters, it is important to remember that there are no integrity checks on the data being copied between the VMs. This means that if a VM becomes corrupted or is infected with a virus, that same fault will be sent to its replica. For this reason, backups of the virtual machine are still a critical part of standard operating procedure. Additionally, this Altaro blog post notes that Hyper-V Replica has other limitations compared to backups when it comes to retention, file space management, keeping separate copies, using multiple storage locations and replication frequency, and it may have a higher total cost of ownership. If you are using a multi-site DR solution with two clusters, make sure that you are taking and storing backups in both sites so that you can recover your data at either location. Also make sure that your backup provider supports clusters, CSV disks, and Hyper-V Replica; however, this is now standard in the industry.
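A test failover can also be triggered from PowerShell. The sketch below assumes the same hypothetical VM name used earlier and is run against the replica copy:

    # On the replica host: create an isolated test copy of the VM without
    # interrupting ongoing replication
    Start-VMFailover -VMName "SQL-VM01" -AsTest

    # When validation is complete, discard the test VM
    Stop-VMFailover -VMName "SQL-VM01"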

Multi-Site Disaster Recovery with Azure Site Recovery (ASR)

All of the aforementioned solutions require you to have a second datacenter, which simply is not possible for some businesses. While you could rent rack space from a colocation facility, the economics may not make sense. Fortunately, the Microsoft Azure public cloud can now be used as your disaster recovery site using Azure Site Recovery (ASR). This technology works with Hyper-V Replica, but instead of copying your VMs to a secondary site, you push them to a nearby Microsoft datacenter. It still has the same limitations as Hyper-V Replica, including the replication frequency, and furthermore you do not have access to the physical infrastructure of your DR site in Azure. The replicated VM can run on the native Azure infrastructure, or you can even build a virtualized guest cluster and replicate to this highly available infrastructure.

While ASR is a significantly cheaper solution than maintaining your own hardware in the secondary site, it is not free. You have to pay for the service, the storage of your virtual hard disks (VHDs) in the cloud, and if you turn on any of those VMs, you will pay for standard Azure VM operating costs.

If you are using ASR, you should follow the same backup best practices as mentioned in the earlier Hyper-V Replica section. The main difference will be that you should use an Azure-native backup solution to protect your replicated VHDs in Azure, in case you switch over the Azure VMs for any extended period of time.

Conclusion

From reviewing this blog series, you should be equipped to make the right decisions when planning your disaster recovery solution using multi-site clustering. Start by understanding your site restrictions, and from there you can plan your hardware needs and storage replication solution. The options range from higher-priced solutions with more features to cost-effective solutions built on Microsoft Azure that offer less control. Even after you have deployed this resilient infrastructure, keep in mind that there are still three main reasons why disaster recovery plans fail:

  • The outage is not detected, so the failover to the secondary datacenter never happens.
  • One component in the DR failover process does not work, usually because of poor or infrequent testing.
  • The process depends on manual steps rather than automation, and humans are a bottleneck and unreliable during a disaster.

This means that whichever solution you choose, make sure that it is well tested with quick failure detection and try to eliminate all dependencies on humans! Good luck with your deployment and please post any questions that you have in the comments section of this blog.


Author: Symon Perriman

With support for Windows 7 ending, a look back at the OS

With Microsoft’s support for Windows 7 ending this week, tech experts and IT professionals remembered the venerable operating system as a reliable and trustworthy solution for its time.

The OS was launched in 2009, and its official end of life came Tuesday, Jan. 14.

Industry observers spoke of Windows 7 ending, remembering the good and the bad of an OS that managed to hold its ground during the explosive rise of mobile devices and the growing popularity of web applications.

An old reliable

Stephen Kleynhans, research vice president at Gartner, said Windows 7 was a significant step forward from Windows XP, the system that had previously gained dominance in the enterprise.

“Windows 7 kind of defined computing for most enterprises over the last decade,” he said. “You could argue it was the first version of Windows designed with some level of true security in mind.”

Windows 7 introduced several new security features, including enhanced Encrypting File System protection, increased control of administrator privileges and support for multiple firewall policies on a single system.

The OS, according to Kleynhans, also provided a comfortable familiarity for PC users.

“It was a really solid platform that businesses could build on,” he said. “It was a good, solid, reliable OS that wasn’t too flashy, but supported the hardware on the market.”

“It didn’t put much strain on its users,” he added. “It fit in with what they knew.”

Eric Klein, analyst at VDC Research Group Inc., said the launch of Windows 7 was a positive move from Microsoft following the “debacle” that was Windows Vista — the immediate predecessor of Windows 7, released in 2007.

“Vista was a very big black eye for Microsoft,” he said. “Windows 7 was more well-refined and very stable.”

The fact that Windows 7 could be more easily administered than previous iterations of the OS, Klein said, was another factor in its enterprise adoption.

“So many businesses, small businesses included, really were all-in for Windows 7,” he said. “It was reliable and securable.”

Windows 7’s longevity, Klein said, was also due to slower hardware refresh rates, as companies often adopt new OSes when buying new computers. With web applications, there is less of a need for individual desktops to have high-end horsepower — meaning users can get by with older machines for longer.

“Ultimately, it was a well-tuned OS,” said Mark Bowker, senior analyst at Enterprise Strategy Group. “It worked, so it became the old reliable for a lot of organizations. Therefore, it remains on a lot of organizations’ computers, even at its end of life.”

Even Microsoft saw the value many enterprises placed in Windows 7 and responded by continuing support, provided customers pay for the service, according to Bowker. The company is allowing customers to pay for extended support for a maximum of three years past the January 14 end of life.

Early struggles for Windows 7

Kleynhans said, although the OS is remembered fondly, the switch from Windows XP was far from a seamless one.

“What people tend to forget about the transition from XP to 7 was that it was actually pretty painful,” he said. “I think a lot of people gloss over the fact that the early days with Windows 7 were kind of rough.”

The biggest issue with that transition was with compatibility, Kleynhans said.

“At the time, a lot of applications that ran on XP and were developed on XP were not developed with a secure environment in mind,” he said. “When they were dropped into Windows 7, with its tighter security, a lot of them stopped working.”

Daniel Beato, director of technology at IT consulting firm TNTMAX, recalled some grumbling about a hard transition from Windows XP.

“At first, like with Windows 10, everyone was complaining,” he said. “As it matured, it became something [enterprises] relied on.”

A worthy successor?

Windows 7 is survived by Windows 10, an OS that experts said is in a better position to deal with modern computing.

“Windows 7 has fallen behind,” Kleynhans said. “It’s a great legacy system, but it’s not really what we want for the 2020s.”

Companies, said Bowker, may be hesitant to upgrade OSes, given the complications of the change. Still, he said, Windows 10 features make the switch more alluring for IT admins.

“Windows 10, especially with Office 365, starts to provide a lot of analytics back to IT. That data can be used to see how efficiently [an organization] is working,” he said. “[Windows 10] really opens eyes with the way you can secure a desktop… the way you can authenticate users. These things become attractive [and prompt a switch].”

Klein said news this week of a serious security vulnerability in Windows underscored the importance of regular support.

“[The vulnerability] speaks to the point that users cannot feel at ease, regardless of the fact that, in 2020, Windows is a very, very enterprise-worthy and robust operating system that is very secure,” he said. “Unfortunately, these things pop up over time.”

The news, Klein said, only underlines the fact that, while some companies may wish to remain with Windows 7, there is a large community of hackers who are aware of these vulnerabilities — and aware that the company is ending support for the OS.

Beato said he still had customers working on Windows 7, but most people with whom he worked had made the switch to Windows 10. Microsoft, he said, had learned from Windows XP and provided a solid pathway to upgrade from Windows 7 to Windows 10.

The future of Windows

Klein noted that news about the next version of Windows would likely be coming soon. He wondered whether the trend toward keeping the smallest amount of data possible on local PCs would affect its design.

“Personally, I’ve found Microsoft to be the most interesting [of the OS vendors] to watch,” he said, calling attention to the company’s willingness to take risks and innovate, as compared to Google and Apple. “They’ve clearly turned the page from the [former Microsoft CEO Steve] Ballmer era.”


Hyper-V Powering Windows Features

December 2019

Hyper-V is Microsoft’s hardware virtualization technology that initially released with Windows Server 2008 to support server virtualization and has since become a core component of many Microsoft products and features. These features range from enhancing security to empowering developers to enabling the most compatible gaming console. Recent additions to this list include Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated-Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2. Additionally, applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop, are also being introduced and improved.

As the scope of Windows virtualization has expanded to become an integral part of the operating system, many new OS capabilities have taken a dependency on Hyper-V. Consequently, this created compatibility issues with many popular third-party products that provide their own virtualization solutions, forcing users to choose between applications or losing OS functionality. Therefore, Microsoft has partnered extensively with key software vendors such as VMware, VirtualBox, and BlueStacks to provide updated solutions that directly leverage Microsoft virtualization technologies, eliminating the need for customers to make this trade-off.

Windows Sandbox is an isolated, temporary desktop environment where you can run untrusted software without the fear of lasting impact to your PC. Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, the entire state, including files, registry changes and the installed software, is permanently deleted. Windows Sandbox is built using the same technology we developed to securely operate multi-tenant Azure services like Azure Functions, and it provides integration with Windows 10 and support for UI-based applications.

Windows Defender Application Guard (WDAG) is a Windows 10 security feature introduced in the Fall Creators Update (version 1709, aka RS3) that protects against targeted threats using Microsoft’s Hyper-V virtualization technology. WDAG augments Windows virtualization-based security capabilities to prevent zero-day kernel vulnerabilities from compromising the host operating system. WDAG also gives enterprise users of Microsoft Edge and Internet Explorer (IE) protection from zero-day kernel vulnerabilities by isolating a user’s untrusted browser sessions from the host operating system. Security-conscious enterprises use WDAG to lock down their enterprise hosts while allowing their users to browse non-enterprise content.

Application Guard isolates untrusted sites using a new instance of Windows at the hardware layer.
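Both Windows Sandbox and WDAG ship as optional Windows features. A rough sketch of enabling them, run from an elevated PowerShell prompt on a supported Windows 10 SKU (a reboot is typically required afterwards):

    # Windows Sandbox
    Enable-WindowsOptionalFeature -Online -FeatureName "Containers-DisposableClientVM" -All

    # Windows Defender Application Guard
    Enable-WindowsOptionalFeature -Online -FeatureName "Windows-Defender-ApplicationGuard" -All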

In order to protect critical resources such as the Windows authentication stack, single sign-on tokens, the Windows Hello biometric stack, and the Virtual Trusted Platform Module, a system’s firmware and hardware must be trustworthy. Windows Defender System Guard reorganizes the existing Windows 10 system integrity features under one roof and sets up the next set of investments in Windows security. It’s designed to make these security guarantees:

  • To protect and maintain the integrity of the system as it starts up
  • To validate that system integrity has truly been maintained through local and remote attestation

Detecting and stopping attacks that tamper with kernel-mode agents at the hypervisor level is a critical component of the unified endpoint protection platform in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP). It’s not without challenges, but the deep integration of Windows Defender Antivirus with hardware-based isolation capabilities allows the detection of artifacts of such attacks.

Hyper-V plays an important role in the container development experience on Windows 10. Since Windows containers require a tight coupling between their OS version and the host they run on, Hyper-V is used to encapsulate containers on Windows 10 in a transparent, lightweight virtual machine. Colloquially, we call these “Hyper-V Isolated Containers”. These containers run in VMs that have been specifically optimized for speed and efficiency when it comes to host resource usage. Hyper-V Isolated Containers most notably allow developers to develop for multiple Linux distros and Windows at the same time, and they are managed just like any container developer would expect because they integrate with all the same tooling (e.g. Docker).
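With Docker Desktop on Windows, Hyper-V isolation can be requested per container. A minimal example is below; the image tag is just an illustration and should match a Windows base image compatible with your host build:

    # Run a Windows container in its own lightweight utility VM instead of
    # sharing the host kernel (process isolation)
    docker run --rm --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver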

The Windows Hypervisor Platform (WHP) adds an extended user-mode API for third-party virtualization stacks and applications to create and manage partitions at the hypervisor level, configure memory mappings for the partition, and create and control execution of virtual processors. The primary value here is that third-party virtualization software (such as VMware) can co-exist with Hyper-V and other Hyper-V based features. Virtualization-Based Security (VBS) is a recent technology that has enabled this co-existence.

WHP provides an API similar to that of Linux’s KVM and macOS’s Hypervisor Framework, and is currently leveraged on projects by QEMU and VMware.

This diagram provides a high-level overview of a third-party architecture.

WSL 2 is the newest version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its feature updates include increased file system performance as well as full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer’s hardware, but it still provides the same user experience as WSL 1 (the current widely available version). The main difference is that WSL 2 runs a true Linux kernel inside a lightweight virtual machine. Individual Linux distros can run as either a WSL 1 or a WSL 2 distro, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side.
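Switching a distro between the two versions is handled by the wsl.exe command line; for example, assuming a distro registered under the name Ubuntu (as shown by the list command):

    # List installed distros and the WSL version each one uses
    wsl --list --verbose

    # Convert an existing distro to WSL 2 (or back to 1)
    wsl --set-version Ubuntu 2

    # Make WSL 2 the default for distros installed from now on
    wsl --set-default-version 2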

Kubernetes started officially supporting Windows Server in production with the release of Kubernetes version 1.14 (in March 2019). Windows-based applications constitute a large portion of the workloads in many organizations. Windows containers provide a modern way for these Windows applications to use DevOps processes and cloud native patterns. Kubernetes has become the de facto standard for container orchestration; hence this support enables a vast ecosystem of Windows applications to not only leverage the power of Kubernetes, but also to leverage the robust and growing ecosystem surrounding it. Organizations with investments in both Windows-based applications and Linux-based applications no longer need to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments. The engineering that supported this release relied upon open source and community led approaches that originally brought Windows Server containers to Windows Server 2016.

These components and tools have allowed Microsoft’s Hyper-V technology to introduce new ways of enabling customer experiences. Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2 are all new Hyper-V components that ensure the security and flexibility customers should expect from Windows. Applications built on Hyper-V, such as Kubernetes for Windows and Docker Desktop, also represent Microsoft’s dedication to customer needs, and that dedication will continue to guide our work going forward.

Author: nickeaton

Office 365 vs. G Suite: Google embraces UC to rival Microsoft

For Google, the unified communications market is a means to an end: keeping G Suite competitive with Microsoft’s Office 365. In 2020, Google plans to close in on the Microsoft suite’s core communication features by migrating businesses to Hangouts Chat, the messaging complement to G Suite’s calling and video conferencing apps.

In mid-2020, Hangouts Chat will replace an older, more basic chat app called Hangouts. While the new app is an improvement, Google will have to add features and build a much larger partner ecosystem to reach par with Office 365.

What’s more, Google’s strategy of maintaining separate products for core communications services is at odds with the direction of the market. Vendors like Microsoft have consolidated calling, messaging and meetings services into a single user interface. But Google is keeping Hangouts Chat distinct from the video conferencing app Hangouts Meet.

“Their challenges are more related to fundamentally who they are,” TJ Keitt, an analyst at Forrester Research, said. “They’re a company that, for a while, had struggled to indicate they understand all the things that large enterprises require.”

G Suite has trailed Office 365 for years. In particular, Google has struggled to appeal to organizations with thousands and tens of thousands of employees. Those customers often require complex feature sets, but Google likes to keep things simple.

“It’s really important for us to provide just really simple, delightful experiences that work,” Smita Hashim, manager of G Suite’s communications apps, said in December. “It’s not like we need every bell and whistle and every feature.”

In 2019, Google tackled low-hanging fruit that had been standing in the way of selling G Suite to customers with thousands of employees. Giving customers some control over where their data is stored was a significant change. Also, adding numerous IT controls and security backstops was critical to enterprises.

But Google does not appear interested in matching Office 365 feature-for-feature. Instead, analysts expect the company will seek to grow G Suite in 2020 and beyond by focusing on specific industries and kinds of companies.

“If Google plays the long game, they don’t need to really worry about whether or not they are beating Microsoft in a lot of the companies that are here right now,” Keitt said. Instead, Google can target new and adolescent companies that haven’t bought into Office 365.

Google’s targets will likely include the verticals of education and technology, as well as fast-growing businesses with a young workforce. The company has already won some big names. In 2019, G Suite added tech company Iron Mountain, with 26,000 employees, and Whirlpool, with 92,000 employees.

In 2020, Google needs to decide whether to get serious about building a communications portfolio on par with Microsoft’s. That would entail expanding the business calling service it launched this year, Google Voice for G Suite.

So far, the vendor has signaled it will keep the calling service simple. Whereas traditional telephony systems offer upwards of 200 features, Google opted for fewer than 20. The new year will likely bring only incremental changes, such as the certification of more desk phones.

“I think, incrementally, they are continuing to improve. They are trying to close the gap,” said Irwin Lazar, an analyst at Nemertes Research. “What I haven’t seen Google really try to do is leapfrog the market.”

Nevertheless, the cloud productivity market is likely still a lucrative one for Google. As of February, 5 million organizations subscribed to G Suite, some paying as much as $25 per user, per month. 

Google Cloud, a division that includes G Suite as well as the vendor’s infrastructure-as-a-service platform, was on track to generate $8 billion in annual revenue as of July.

“Being number two in a multi-billion-dollar [office productivity] market is fine,” said Jeffrey Mann, an analyst at Gartner.


US Army HR expects migration to Azure cloud will cut IT costs

The U.S. Army is moving its civilian HR services from on-premises data centers to Microsoft’s cloud. The migration to Azure has the makings of a big change. Along with shifting Army HR services to the cloud, it plans to move off some of its legacy applications.

It’s a move that the Army said will give it more flexibility and reduce its costs.

The Army Civilian Human Resources Agency (CHRA) is responsible for supporting approximately 300,000 Army civilian employees and 33,000 Department of Defense employees. It provides a full range of HR services.

The migration to Azure was noted in a contract announcement by Accelera Solutions Inc., a systems integrator based in Fairfax, Va. The $40.4 million Army contract is for three years. The firm is a Microsoft federal cloud partner.

The federal government, including the Department of Defense, is broadly consolidating data centers and shifting some systems to the cloud.

Shift to cloud will improve HR capabilities

The Army said it is moving its civilian HR services to the cloud for three reasons. The Army “has determined that the cloud is the most effective way to host CHRA operated programs,” said Matthew Leonard, an Army spokesperson, in an email. It also needs “a more agile operating environment,” he said.

The third benefit of migrating to Azure “will allow for improved overall capabilities at lower cost,” Leonard said. “We will not need to expend resources to maintain data centers and expensive hardware,” he said.

Some of the Army’s savings will come by turning off resources outside of business hours, such as those used for development.

The Army didn’t provide an estimate of cost savings. But the Defense Department, in budget documents, has estimated cumulative data center consolidation savings of $751 million from 2017 to 2024.


Some existing Army HR applications will undergo a migration to Azure, but new cloud-based HR applications will also be adopted as part of this shift.

“Our goal is to significantly reduce the number of applications through the use of modern, out-of-the-box platforms,” Leonard said. Over time, the Army plans to move other applications to the cloud.

Accelera declined to comment on the award, but in its announcement said its work includes migrating the Army’s HR applications from on-premises data centers to the Azure cloud. It will also operate the cloud environment.

Microsoft recently was awarded a broader Defense Department contract to host military services in its Azure cloud. The Joint Enterprise Defense Infrastructure contract, or JEDI, is estimated at $10 billion over 10 years. AWS, a JEDI contract finalist, is challenging the award in court.

“The CHRA cloud initiative does seem to be driven more by the data center consolidation initiative that’s been around since the Obama administration, and much less by the current flap over JEDI,” said Ray Bjorklund, president of government IT market research firm BirchGrove Consulting LLC in Maryland. Migration to the cloud has been a “recurring method” of IT consolidation, he said. 


Do you know the difference in the Microsoft HCI programs?

While similar in name, Microsoft’s Azure Stack and Azure Stack HCI products are substantially different product offerings designed for different use cases.

Azure Stack brings Azure cloud capabilities into the data center for organizations that want to build and run cloud applications on localized resources. Azure Stack HCI operates on the same Hyper-V-based, software-driven compute, storage and networking technologies but serves a fundamentally different purpose. This new Microsoft HCI offering is a hyper-converged infrastructure product that combines vendor-specific hardware with Windows Server 2019 Datacenter edition and management tools to provide a highly integrated and optimized computing platform for local VM workloads.

Azure Stack gives users a way to employ Azure VMs for Windows and Linux, Azure IoT and Event Hubs, Azure Marketplace, Docker containers, Azure Key Vault, Azure Resource Manager, Azure Web Apps and Functions, and Azure administrative tools locally. This functionality gives an organization the benefits of Azure cloud operation, while also satisfying regulatory requirements that require workloads to run in the data center.

Azure Stack HCI offers optional connections to an array of Azure cloud services, including Azure Site Recovery, Azure Monitor, Azure Backup, Azure Update Management, Azure File Sync and Azure Network Adapter. However, these services themselves remain in the Azure cloud. Also, there is no way to convert this Microsoft HCI product into an Azure Stack deployment.

Windows Server Software-Defined products still exist

Azure Stack HCI evolved from Microsoft’s Windows Server Software-Defined (WSSD) HCI offering. The WSSD program still exists; the main difference on the software side is that hardware in the WSSD program runs the Windows Server 2016 OS.

WSSD HCI is similar to Azure Stack HCI with a foundation of vendor-specific hardware, the inclusion of Windows Server technologies — Hyper-V, Storage Spaces Direct and software-defined networking — and Windows Admin Center for systems management. Azure Stack HCI expands on WSSD through improvements to Windows Server 2019 and tighter integration with Azure services.


How to Create and Manage Hot/Cold Tiered Storage

When I was working in Microsoft’s File Services team around 2010, one of the primary goals of the organization was to commoditize storage and make it more affordable to enterprises. Legacy storage vendors offered expensive products that often consumed a majority of the IT department’s budget, and they were slow to make improvements because customers were locked in. Since then, every release of Windows Server has included storage management features which were previously only provided by storage vendors, such as deduplication, replication, and mirroring. These features can be used to manage commodity storage arrays and disks, reducing costs and eliminating vendor lock-in. Windows Server now offers a much-requested feature: the ability to move files between different tiers of “hot” (fast) storage and “cold” (slow) storage.

Managing hot/cold storage is conceptually similar to computer memory cache but at an enterprise scale. Files which are frequently accessed can be optimized to run on the hot storage, such as faster SSDs. Meanwhile, files which are infrequently accessed will be pushed to cold storage, such as older or cheaper disks. These lower priority files will also take advantage of file compression techniques like data deduplication to maximize storage capacity and minimize cost. Identical or varying disk types can be used because the storage is managed as a pool using Windows Server’s storage spaces, so you do not need to worry about managing individual drives. The file placement is controlled by the Resilient File System (ReFS), a file system which is used to optimize and rotate data between the “hot” and “cold” storage tiers in real-time based on their usage. However, using tiered storage is only recommended for workloads that are not regularly accessed. If you have permanently running VMs or you are using all the files on a given disk, there would be little benefit in allocating some of the disk to cold storage. This blog post will review the key components required to deploy tiered storage in your datacenter.

Overview of Resilient File System (ReFS) with Storage Tiering

The Resilient File System was first introduced in Windows Server 2012 with support for limited scenarios, but it has been greatly enhanced through the Windows Server 2019 release. It was designed to be efficient, support multiple workloads, avoid corruption and maximize data availability. More specific to tiering, though, ReFS automatically divides the pool of storage into two tiers, one for high-speed performance and one for maximizing storage capacity. The performance tier receives all the writes on the faster disk for better performance. If those new blocks of data are not frequently accessed, the files will gradually be moved to the capacity tier. Reads will usually happen from the capacity tier, but can also happen from the performance tier as needed.

Storage Spaces Direct and Mirror-Accelerated Parity

Storage Spaces Direct (S2D) is one of Microsoft’s enhancements designed to reduce costs by allowing servers with Direct Attached Storage (DAS) drives to support Windows Server Failover Clustering. Previously, highly-available file server clusters required some type of shared storage on a SAN or used an SMB file share, but S2D allows for small local clusters which can mirror the data between nodes. Check out Altaro’s blog on Storage Spaces Direct for in-depth coverage on this technology.
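As a quick sketch, once the failover cluster itself has been formed, S2D is enabled with a single cmdlet that claims the eligible local drives on every node into one pool (the pool name below is just an example):

    # Run once, from any node of the cluster
    Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2D Pool" -Confirm:$false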

With Windows Server 2016 and 2019, S2D offers mirror-accelerated parity, which is used for tiered storage, but it is generally recommended for backups and less frequently accessed files rather than heavy production workloads such as VMs. In order to use tiered storage with ReFS, you will use mirror-accelerated parity. This provides decent storage capacity by using both mirroring and a parity drive to help prevent and recover from data loss. In the past, mirroring and parity would conflict and you would usually have to select one or the other. Mirror-accelerated parity works with ReFS by taking writes and mirroring them (hot storage), then using parity to optimize their storage on disk (cold storage). By switching between these storage optimization techniques, ReFS provides admins with the best of both worlds.

Creating Hot and Cold Tiered Storage

When configuring hot and cold storage, you define the ratio between the hot and cold tiers. For most workloads, Microsoft recommends allocating 20% to hot and 80% to cold. If you are running high-performance workloads, consider allocating more hot storage to support more writes. On the flip side, if you have a lot of archival files, allocate more cold storage. Remember that with a storage pool you can combine multiple disk types under the same abstracted storage space. The following PowerShell cmdlets show you how to configure a 1,000 GB disk to use 20% (200 GB) for performance (hot storage) and 80% (800 GB) for capacity (cold storage).
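A sketch of how this can look with the Storage cmdlets on an S2D pool is shown below; it assumes the default Performance and Capacity tier names created by S2D, and the pool and volume names are placeholders:

    # Create a 1,000 GB mirror-accelerated parity volume:
    # 200 GB (20%) on the mirrored performance tier, 800 GB (80%) on the parity capacity tier
    New-Volume -FriendlyName "TieredVolume01" `
        -FileSystem CSVFS_ReFS `
        -StoragePoolFriendlyName "S2D*" `
        -StorageTierFriendlyNames Performance, Capacity `
        -StorageTierSizes 200GB, 800GB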

Managing Hot and Cold Tiered Storage

If you want to increase the performance of your disk, you can allocate a greater percentage of the disk to the performance (hot) tier. In the following example, we use PowerShell cmdlets to create a 30:70 ratio between the tiers:
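A rough sketch, assuming the tiers created for the volume above are named after the volume (check the exact names with Get-StorageTier first):

    # Inspect the tier names and current sizes that belong to the volume
    Get-StorageTier | Where-Object FriendlyName -Like "TieredVolume01*" |
        Format-Table FriendlyName, Size

    # Resize to a 30:70 split of the 1,000 GB volume
    Resize-StorageTier -FriendlyName "TieredVolume01-Performance" -Size 300GB
    Resize-StorageTier -FriendlyName "TieredVolume01-Capacity" -Size 700GB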

Unfortunately, this resizing only changes the ratio between the tiers and does not change the size of the partition or volume, so you will likely also want to resize those, for example with the Resize-Partition cmdlet.

Optimizing Hot and Cold Storage

Based on the types of workloads you are running, you may wish to further tune when data is moved between hot and cold storage, which is known as the “aggressiveness” of the rotation. By default, the hot storage will wait until 85% of its capacity is full before it begins to send data to the cold storage. If you have a lot of write traffic going to the hot storage, you will want to reduce this value so that performance-tier data gets pushed to the cold storage sooner. If you have fewer write requests and want to keep data in hot storage longer, you can increase this value. Since this is an advanced configuration option, it must be configured via the registry on every node in the S2D cluster, and it also requires a restart. Here is a sample script to run on each node if you want to change the aggressiveness so that it rotates files when the performance tier reaches 70% capacity:
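A sketch of such a script is below. It assumes the DataDestageSsdFillRatioThreshold registry value that Microsoft documents for tuning mirror-accelerated parity rotation (expressed as a percentage), so verify the value name against the current documentation before rolling it out:

    # Run on each node, then restart the node for the change to take effect
    New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" `
        -Name "DataDestageSsdFillRatioThreshold" `
        -PropertyType DWORD -Value 70 -Force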

You can apply this setting cluster-wide by using the following cmdlet:
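One way to push the same registry value to every node at once (again, a sketch; each node still needs a restart afterwards):

    Get-ClusterNode | ForEach-Object {
        Invoke-Command -ComputerName $_.Name -ScriptBlock {
            New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies" `
                -Name "DataDestageSsdFillRatioThreshold" `
                -PropertyType DWORD -Value 70 -Force
        }
    }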

NOTE: If this is applied to an active cluster, make sure that you reboot one node at a time to maintain service availability.

Wrap-Up

Now you should be fully equipped with the knowledge to optimize your commodity storage using the latest Windows Server storage management features. You can pool your disks with storage spaces, use storage spaces direct (S2D) to eliminate the need for a SAN, and ReFS to optimize the performance and capacity of these drives.  By understanding the tradeoffs between performance and capacity, your organization can significantly save on storage management and hardware costs. Windows Server has made it easy to centralize and optimize your storage so you can reallocate your budget to a new project – or to your wages!

What about you? Have you tried any of the features listed in the article? Have they worked well for you? Have they not worked well? Why or why not? Let us know in the comments section below!


Author: Symon Perriman

Analyzing data from space – the ultimate intelligent edge scenario – The Official Microsoft Blog

Space represents the next frontier for cloud computing, and Microsoft’s unique approach to partnerships with pioneering companies in the space industry means together we can build platforms and tools that foster significant leaps forward, helping us gain deeper insights from the data gleaned from space.

One of the primary challenges for this industry is the sheer amount of data available from satellites and the infrastructure required to bring this data to ground, analyze the data and then transport it to where it’s needed. With almost 3,000 new satellites forecast to launch by 2026 [1] and a threefold increase in the number of small satellite launches per year, the magnitude of this challenge is growing rapidly.

Essentially, this is the ultimate intelligent edge scenario – where massive amounts of data must be processed at the edge – whether that edge is in space or on the ground. Then the data can be directed to where it’s needed for further analytics or combined with other data sources to make connections that simply weren’t possible before.

DIU chooses Microsoft and Ball Aerospace for space analytics

To help with these challenges, the Defense Innovation Unit (DIU) just selected Microsoft and Ball Aerospace to build a solution demonstrating agile cloud processing capabilities in support of the U.S. Air Force’s Commercially Augmented Space Inter Networked Operations (CASINO) project.

With the aim of making satellite data more actionable more quickly, Ball Aerospace and Microsoft teamed up to answer the question: “what would it take to completely transform what a ground station looks like, and downlink that data directly to the cloud?”

The solution involves placing electronically steered flat panel antennas on the roof of a Microsoft datacenter. These phased array antennas don’t require much power and need only a couple of square meters of roof space. This innovation can connect multiple low earth orbit (LEO) satellites with a single antenna aperture, significantly accelerating the delivery rate of data from satellite to end user with data piped directly into Microsoft Azure from the rooftop array.

Analytics for a massive confluence of data

Azure provides the foundational engine for Ball Aerospace algorithms in this project, processing worldwide data streams from up to 20 satellites. With the data now in Azure, customers can direct that data to where it best serves the mission need, whether that’s moving it to Azure Government to meet compliance requirements such as ITAR or combining it with data from other sources, such as weather and radar maps, to gain more meaningful insights.

In working with Microsoft, Steve Smith, Vice President and General Manager, Systems Engineering Solutions at Ball Aerospace called this type of data processing system, which leverages Ball phased array technology and imagery exploitation algorithms in Azure, “flexible and scalable – designed to support additional satellites and processing capabilities. This type of data processing in the cloud provides actionable, relevant information quickly and more cost-effectively to the end user.”

With Azure, customers gain its advanced analytics capabilities such as Azure Machine Learning and Azure AI. This enables end users to build models and make predictions based on a confluence of data coming from multiple sources, including multiple concurrent satellite feeds. Customers can also harness Microsoft’s global fiber network to rapidly deliver the data to where it’s needed using services such as ExpressRoute and ExpressRoute Global Reach. In addition, ExpressRoute now enables customers to ingest satellite data from several new connectivity partners to address the challenges of operating in remote locations.

For tactical units in the field, this technology can be replicated to bring information to where it’s needed, even in disconnected scenarios. As an example, phased array antennas mounted to a mobile unit can pipe data directly into a tactical datacenter or Data Box Edge appliance, delivering unprecedented situational awareness in remote locations.

A similar approach can be used for commercial applications, including geological exploration and environmental monitoring in disconnected or intermittently connected scenarios. Ball Aerospace specializes in weather satellites, and now customers can more quickly get that data down and combine it with locally sourced data in Azure, whether for agricultural, ecological, or disaster response scenarios.

This partnership with Ball Aerospace enables us to bring satellite data to ground and cloud faster than ever, leapfrogging other solutions on the market. Our joint innovation in direct satellite-to-cloud communication and accelerated data processing provides the Department of Defense, including the Air Force, with entirely new capabilities to explore as they continue to advance their mission.

[1] https://www.satellitetoday.com/innovation/2017/10/12/satellite-launches-increase-threefold-next-decade/


Author: Microsoft News Center

Microsoft PowerApps pricing proposal puts users on edge

BOSTON — Microsoft’s proposed licensing changes for PowerApps, the cloud-based development tools for Office 365 and Dynamics 365, have confused users and made them fearful the software will become prohibitively expensive.

Last week, at Microsoft’s SPTechCon user conference, some organizations said the pricing changes, scheduled to take effect Oct. 1, were convoluted. Others said the new pricing — if it remains as previewed by Microsoft earlier this summer — would force them to limit the use of the mobile app development tools.

“We were at the point where we were going to be expanding our usage, instead of using it for small things, using it for larger things,” Katherine Prouty, a developer at the nonprofit Greater Lynn Senior Services, based in Lynn, Mass., said. “This is what our IT folks are always petrified of; [the proposed pricing change] is confirmation of their worst nightmares.”


Planned apps the nonprofit group might have to scrap if the pricing changes take effect include those for managing health and safety risks for its employees and clients in a regulatory-compliant way, and protecting the privacy of employees as they post to social media on behalf of the organization, Prouty said.

Developers weigh in

The latest pricing proposal primarily affects organizations building PowerApps that tap data sources outside of Office 365 and Dynamics 365. People connecting to Salesforce, for example, would pay $10 per user, per month, unless they opt to pay $40 per user, per month for unlimited use of data connectors to third-party apps.

The new pricing would take effect even if customers were only connecting Office 365 to Dynamics 365 or vice versa. That additional cost for using apps they’re already paying for does not sit well with some customers, while others find the pricing scheme perplexing. 

“It’s all very convoluted right now,” said David Drever, senior manager at IT consultancy Protiviti, based in Menlo Park, Calif.

Manufacturing and service companies that create apps using multiple data sources are among the businesses likely to pay a lot more in PowerApps licensing fees, said IT consultant Daniel Christian of PowerApps911, based in Maineville, Ohio.

Annual PowerApps pricing changes

However, pricing isn’t the only problem, Christian said. Microsoft’s yearly overhaul of PowerApps fees also contributes to customer handwringing over costs.

“Select [a pricing model] and stick with it,” he said. “I’m OK with change; we’ll manage it and figure it out. It’s the repetitive changes that bug me.”

Microsoft began restricting PowerApps access to outside data sources earlier this year, putting into effect changes announced last fall. The new policy required users to purchase a special PowerApps plan to connect to popular business applications such as Salesforce Chatter, GotoMeeting and Oracle Database. The coming changes as presented earlier this summer would take that one step further by introducing per-app fees and closing loopholes that were available on a plan that previously cost $7 per user per month.

Matt Wade, VP of client services at H3 Solutions Inc., based in Manassas, Va., said customers should watch Microsoft’s official PowerApps blog for future information that might clarify costs and influence possible tweaks to the final pricing model. H3 Solutions is the maker of AtBot, a platform for developing bots for Microsoft’s cloud-based applications.

“People who are in charge of administering Office 365 and the Power Platform need to be hyper-aware of what’s going on,” Wade said. “Follow the blog, comment, provide feedback — and do it respectfully.”


‘Thank you for helping us make history’: Microsoft’s new London flagship store opens to the public

Microsoft’s new flagship store in London has opened its doors to the public for the first time, with people waiting hours to be among the first to set foot inside.

The first physical retail store for Microsoft in the UK, which is located on Oxford Circus and covers 22,000 square feet over three floors, was officially unveiled to the crowd at 11am.

Chris Capossela, Microsoft’s Chief Marketing Officer, Cindy Rose, UK Chief Executive, and Senior Store Manager John Carter welcomed the public by giving speeches in front of the doors on Regent Street.

Rose said the store was a “symbol of Microsoft’s enduring commitment to the UK”, which allows people to “experience the best the company has to offer”. “Thank you for helping us make history today,” she added.

People had started queuing along Regent Street from 7am to see Microsoft’s Surface devices, HoloLens, Xbox Gaming Lounge and sit in the McLaren Senna on the ground floor.

Store associates welcome customers to the store

One customer, Blair, had started queuing at 7:30am after travelling from Wiltshire by bus. “I’m a Microsoft fan but I especially love Xbox. I heard there would be a few games from [videogame event] E3 here,” he said. “I really want to have a go in the McLaren Senna.”

Denise, from Sutton, was interested in seeing the Surface devices. “I want to see the latest technology and products that Microsoft has in there. I might buy a new laptop today.”

Callum, from London, also wanted to sit in the McLaren Senna. “I play a lot of Forza, so I want to experience the car and the Xbox Gaming Lounge,” he said.

James, from Reading, wanted to see how the store could help businesses. “I’m excited to see what it’s like,” he said. “I want to see what they can offer businesses. The outside of the store looks incredible; it’s a masterpiece of architecture.”

Microsoft handed out free T-shirts and Xbox Game Pass codes to people in the queue, while the first 100 visitors to buy a Surface Pro 6 were also given a free limited edition Liberty Surface Type Cover.

Staff clap as customers enter the store

At 11am, the curtains in the store windows dropped to reveal excited store staff, dressed in red, green, yellow and blue shirts – the colours of Microsoft’s logo – jumping up and down and cheering.

The customers walked into a store with a modern feel, with lots of space and wood and glass surfaces. They were greeted by staff standing in front of a large video wall and Surface devices on tables, with the McLaren on their right and the HoloLens mixed-reality headset to their left. A wooden spiral staircase or lifts took them to the first floor, where they could play the latest Xbox and PC titles in high-quality gaming chairs and professional pods in the Gaming Lounge, purchase third-party laptops and accessories, get tech support, training, repairs and advice from the Answer Desk, or go to the Community Theatre, where coding workshops were taking place. Visitors could create their own personalised Surface Type Cover in the Surface Design Lab, featuring a range of designs that can be etched directly onto the cover. They also took photos in the Selfie Area.

The enterprise area on the second floor is a place to support, train and grow businesses no matter where they are on their digital transformation journey. From small companies and educational institutions to enterprise customers, the Product Advisors and Cloud Technical Experts will help customers discover, deploy and use Microsoft 365 and other resources to solve business challenges such as AI, data security, collaboration and workplace efficiencies. This floor also contains an area for hosting events, as well as meeting rooms and a Showcase space for demonstrating how customers, including Carlsberg and Toyota, are digitally transforming.

It is also the most accessible store Microsoft has ever opened, with store associates collectively speaking 45 languages, buttons to open doors, lower desks to help those in wheelchairs and Xbox Adaptive Controllers available for gamers with restricted movement.


Author: Tracy Ith