Tag Archives: maintain

What are Windows virtualization-based security features?

Windows administrators must maintain constant vigilance to prevent a vulnerability from crippling their systems or exposing data to threat actors. For shops that use Hyper-V, Microsoft offers another layer of protection through its virtualization-based security.

Virtualization-based security uses Hyper-V and the machine’s hardware virtualization features to isolate and protect an area of system memory that runs the most sensitive and critical parts of the OS kernel and user modes. Once deployed, these protected areas can guard other kernel and user-mode instances.

Virtualization-based security effectively reduces the Windows attack surface: even if a malicious actor gains access to the OS kernel, the protected areas can block code execution and access to secrets, such as system credentials. In theory, these added protections would prevent malware that uses kernel exploits from gaining access to sensitive information.

Code integrity checks, malware prevention among key capabilities

Virtualization-based security is a foundational technology and must be in place before adopting a range of advanced security features in Windows Server. One example is Hypervisor-Enforced Code Integrity (HVCI), which examines code such as drivers and ensures that kernel-mode drivers and binaries are signed before they are loaded into memory. Unsigned content is denied, reducing the possibility of running malicious code.
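You can check whether virtualization-based security and HVCI are actually running on a host by querying the Win32_DeviceGuard class. A minimal sketch in PowerShell, run on the server itself (the property values are numeric codes described in Microsoft's documentation for the class):

# Query Device Guard / virtualization-based security status
$dg = Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard

# 2 means VBS is enabled and running; 1 means enabled but not yet running
$dg.VirtualizationBasedSecurityStatus

# Lists the services running under VBS, such as Credential Guard and HVCI
$dg.SecurityServicesRunning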

Other advanced security capabilities that rely on virtualization-based security include Windows Defender Credential Guard, which prevents malware from accessing credentials, and the ability to create virtual trusted platform modules (TPMs) for shielded VMs.

In Windows Server 2019, Microsoft expanded its shielded VMs feature beyond the Windows platform to cover Linux workloads running on Hyper-V, preventing data leakage both while the VM is at rest and when it moves to another Hyper-V host.

New in Windows Server 2019 is a feature called host key attestation, which uses asymmetric key pairs to authenticate hosts covered by the Host Guardian Service. Microsoft describes this as an easier deployment method because it does not require an Active Directory trust arrangement.

What are the virtualization-based security requirements?

Virtualization-based security has numerous requirements, so it's important to investigate the complete set of hardware, firmware and software requirements before adopting it. A missing requirement may make it impossible to enable virtualization-based security and will undermine the system security features that depend on it.

At the hardware level, virtualization-based security needs a 64-bit processor with virtualization extensions (Intel VT-x or AMD-V) and second-level address translation, implemented as Intel Extended Page Tables or AMD Rapid Virtualization Indexing. I/O virtualization must be supported through Intel VT-d or AMD-Vi. The server hardware must also include TPM 2.0 or better.

System firmware must support the Windows System Management Mode (SMM) Security Mitigations Table specification. The Unified Extensible Firmware Interface (UEFI) must support memory reporting features such as the UEFI v2.6 Memory Attributes Table. Support for Secure Memory Overwrite Request (MOR) v2 inhibits in-memory attacks. All drivers must be compatible with HVCI standards.
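The same Win32_DeviceGuard class can help verify prerequisites before you try to enable the feature. A rough check, assuming you run it on the target server:

$dg = Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard

# Numeric codes describing the platform security features the hardware and
# firmware offer versus what the configured policy requires; compare both
# lists against the Win32_DeviceGuard documentation
$dg.AvailableSecurityProperties
$dg.RequiredSecurityProperties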

Go to Original Article
Author:

How to Configure Failover Clusters to Work with DR & Backup

As your organization grows, it is important to plan not only a high-availability solution to maintain service continuity, but also a disaster recovery solution in the event that the operations of your entire datacenter are compromised. High availability (HA) allows your applications or virtual machines (VMs) to stay online by moving them to other server nodes in your cluster. But what happens if your region experiences a power outage, hurricane or fire? What if your staff cannot safely access your datacenter? During times of crisis, your team will likely be focused on the well-being of their family or home, and not particularly interested in the availability of their company's services. This is why it is important not only to protect against local crashes but also to be able to move your workloads between datacenters or clouds, using disaster recovery (DR). Because you will need access to your data in both locations, you must make sure that the data is replicated and consistent across them. The architecture of your DR solution will influence the replication solution you select.

Basic Architecture of a Multi-Site Failover Cluster

This three-part blog series will first look at the design decisions needed to create a resilient multi-site infrastructure. Future posts will then cover the different types of replicated storage you can use from third parties, along with Microsoft's DFS Replication, Hyper-V Replica and Azure Site Recovery (ASR), and backup best practices for each.

Probably the first design decision will be the physical location of your second site. In some cases, this may be your organization's second office location, and you will not have any input. Sometimes you will be able to select the datacenter of a service provider that offers colocation. When you do have a choice, first consider how a single disaster could affect both locations. Make sure that the two sites are on separate power grids. Then consider what types of disasters your region is susceptible to, whether that is hurricanes, wildfires, earthquakes or even terrorist attacks. If your primary site is along a coastline, consider finding an inland location. Ideally, you should select a location that is far enough away from your primary site to avoid a multi-site failure. Some organizations even select a site that is hundreds or thousands of miles away!

At first, selecting a cross-country location may sound like the best solution, but with added distance comes added latency. If you wish to run different services from both sites (an active/active configuration), be aware that the distance can cause performance issues as information needs to travel farther across networks. If you decide to use synchronous replication, you may be limited to a few hundred miles or less to ensure that the data stays consistent. For this reason, many organizations choose an active/passive configuration in which the datacenter that is closer to the business or its customers functions as the primary site, and the secondary datacenter remains dormant until it is needed. This solution is easier to manage, yet more expensive, as you have duplicate hardware that is mostly unused. Some organizations will use a third (or more) site to provide greater resiliency, but this adds more complexity when it comes to backup, replication and cluster membership (quorum).

Now that you have picked your sites, you should determine the optimal number of cluster nodes in each location. You should always have at least two nodes at each site so that if a host crashes, the workload can fail over within the primary site before going to the DR site, minimizing downtime. You can configure local failover first through the cluster's preferred owner settings. The more nodes you have at each site, the more local failures you can sustain before moving to the secondary site.
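In PowerShell, preferred owners are set per cluster group. A sketch with the FailoverClusters module, where the group and node names are placeholders for your own:

# Prefer the two primary-site nodes for this clustered VM role
Set-ClusterOwnerNode -Group "SQLVM01" -Owners SiteA-Node1, SiteA-Node2

# Confirm the preferred owner list
Get-ClusterOwnerNode -Group "SQLVM01"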

Use Local Failover First before Cross-Site Failover

It is also recommended that you have the same number of nodes at each site, ideally with identical hardware configurations. This keeps application performance fairly consistent in both locations and should reduce your maintenance costs. Some organizations allocate older, still-supported hardware to their secondary site, accepting that workloads will run slower until they return to the primary site. With this type of configuration, you should also configure automatic failback so that the workloads are restored to the faster primary site once it is healthy.
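Automatic failback is also a per-group setting. Something along these lines should work, again with a placeholder group name (the failback window values are optional and restrict failback to a quiet hour):

$group = Get-ClusterGroup -Name "SQLVM01"
$group.AutoFailbackType    = 1    # 1 = allow automatic failback, 0 = prevent
$group.FailbackWindowStart = 22   # only fail back between 22:00
$group.FailbackWindowEnd   = 23   # and 23:00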

If you have enough hardware, then a best practice is to deploy at least three nodes at each site so that if you lose a single node and fail over locally, there will be less of a performance impact. In the event that you lose one of your sites in a genuine disaster for an extended period of time, you can then evict all the nodes from that site and still have a three-node cluster running in a single site. In this scenario, having a minimum of three nodes is important so that you can sustain the loss of one node while keeping the rest of the cluster online by maintaining its quorum.

If you are an experienced cluster administrator, you probably identified the problem with having two sites with an identical number of nodes: maintaining cluster quorum. Quorum is the cluster's membership algorithm, which ensures that there is exactly one owner of each clustered workload. It avoids a "split-brain" scenario in which a partition between two sets of cluster nodes (such as between two sites) lets two hosts independently run the same application, causing data inconsistency during replication. Quorum works by giving each cluster node a vote, and a majority (more than half) of voters must be in communication with each other to run the workloads. So how is this possible with the recommendation of two balanced sites with three nodes each (six total votes)?

The most common solution is to have an extra vote in a third site (seven total votes). As long as either the primary or secondary site can communicate with that voter in the third site, that group of nodes will have a majority of votes and can run all the workloads. For those who do not have the luxury of a third site, Microsoft allows you to place this vote inside the Microsoft Azure cloud using a cloud witness. For a detailed understanding of this scenario, check out this Altaro blog about Understanding File Share Cloud Witness and Failover Clustering Quorum in the Microsoft Azure Cloud.
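Configuring the witness is a one-liner with Set-ClusterQuorum; the storage account name, access key and file share path below are placeholders:

# Cloud witness in Azure (no third site required)
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"

# Or, if you do have a third site, a file share witness hosted there
Set-ClusterQuorum -NodeAndFileShareMajority "\\SiteC-FS01\ClusterWitness"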

Use a File Share Cloud Witness to Maintain Quorum

If you are familiar with designing a traditional Windows Server Failover Cluster, you know that redundancy of every hardware and software component is critical to eliminate any single point of failure.  With a disaster recovery solution, this concept is extended by also providing redundancy to your datacenters, including the servers, storage, and networks.  Between each site, you should have multiple redundant networks for cross-site communications.

Next, you will configure your shared storage at each site and set up cross-site replication between the disks, using either a third-party replication solution such as Altaro VM Backup, or Microsoft's Hyper-V Replica or Azure Site Recovery. These configurations will be covered in the subsequent blog posts in this series. Finally, make sure that the entire multi-site cluster, including the replicated storage, passes every test in the Cluster Validation Wizard.
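You can run the same validation from PowerShell with Test-Cluster; the node names here are placeholders, and the cmdlet saves an HTML report you can review afterward:

Test-Cluster -Node SiteA-Node1, SiteA-Node2, SiteA-Node3, SiteB-Node1, SiteB-Node2, SiteB-Node3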

Again, we’ll be covering more regarding this topic in future blog posts, so keep an eye out for them! Additionally, have you worked through multi-site failover planning for a failover cluster before? What things went well? What were the troubles you ran into? We’d love to know in the comments section below!


Go to Original Article
Author: Symon Perriman

Why You Should Be Using VM Notes in PowerShell

One of the nicer Hyper-V features is the ability to maintain notes for each virtual machine. Most of my VMs are for testing and I'm the only one who accesses them, so I often record items like an admin password or when the VM was last updated. Of course, you would never store passwords in a production environment, but you might like to record when a VM was last modified and by whom. For managing a single VM, it isn't that big a deal to use Hyper-V Manager. But when it comes to managing notes for multiple VMs, PowerShell is a better solution.

In this post, we’ll show you how to manage VM Notes with PowerShell and I think you’ll get the answer to why you should be using VM Notes as well. Let’s take a look.

Using Set-VM

The Hyper-V module includes a command called Set-VM, which has a -Notes parameter that allows you to set a note.
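Here is what that looks like; SRV1 is a placeholder VM name:

# Set a note on a single VM
Set-VM -VMName SRV1 -Notes "Last updated $(Get-Date) by $env:USERNAME"

# Or set the same note on every VM on the host
Get-VM | Set-VM -Notes "Baseline refreshed $(Get-Date -Format d)"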

Displaying a Hyper-V VM note

As you can see, it works just fine. Even at scale.

Setting notes on multiple VMs

But there are some limitations. First off, there is no way to append to existing notes. You could retrieve the existing notes, build a new value in your script, and then call Set-VM with the combined text. To clear a note, you run Set-VM with a value of "" for -Notes. That's not exactly intuitive. I decided to find a better way.
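For the record, the workaround looks something like this (SRV1 again being a placeholder name):

# Append by reading the current note and writing back the combined text
$vm = Get-VM -Name SRV1
Set-VM -VM $vm -Notes "$($vm.Notes)`nPatched $(Get-Date -Format d)"

# Clearing a note means writing an empty string
Set-VM -VMName SRV1 -Notes ""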

Diving Deep into WMI

Hyper-V stores much of its configuration in WMI (Windows Management Instrumentation). You'll notice that many of the Hyper-V cmdlets have parameters for CIM sessions. But you can also dive into these classes directly; they live in the root/virtualization/v2 namespace, and many of them are prefixed with Msvm_.
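A quick way to see what's in that namespace:

# List the Hyper-V related CIM classes in the virtualization namespace
Get-CimClass -Namespace root/virtualization/v2 -ClassName Msvm_* |
    Select-Object -ExpandProperty CimClassName |
    Sort-Object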

Getting Hyper-V CIM Classes with PowerShell

After a bit of research and digging around in these classes, I learned that to update a virtual machine's settings, you need to get an instance of Msvm_VirtualSystemSettingData, update it, and then invoke the ModifySystemSettings() method of the Msvm_VirtualSystemManagementService class. Normally, I would do all of this with the CIM cmdlets like Get-CimInstance and Invoke-CimMethod. If I already have a CimSession to a remote Hyper-V host, why not reuse it?

But there was a challenge. The ModifySystemSettings() method needs a parameter: basically a text version of the Msvm_VirtualSystemSettingData object. However, the text needs to be in a specific format. WMI has a way to format the text, which you'll see in a moment. Unfortunately, there is no technique using the CIM cmdlets to format the text. Whatever Set-VM is doing under the hood is above my pay grade. Let me walk you through this using Get-WmiObject.

First, I need to get the settings data for a given virtual machine.
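Here's a sketch of that step, using a placeholder VM name of SRV1 and filtering on the realized (current) settings rather than a snapshot:

$data = Get-WmiObject -Namespace root\virtualization\v2 `
    -Class Msvm_VirtualSystemSettingData `
    -Filter "ElementName='SRV1' AND VirtualSystemType='Microsoft:Hyper-V:System:Realized'"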

This object has all of the virtual machine settings.

I can easily assign a new value to the Notes property.

$data.notes = "Last updated $(Get-Date) by $env:USERNAME"

At this point, I'm not doing much more than what Set-VM already does. But if I wanted to append, I could take the existing note, add my new text and assign the combined value.
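An append might look like this before the change is committed (the note text is just an example):

# Notes can come back as an array of strings, so flatten it first
$current = ($data.Notes | Out-String).Trim()
$data.Notes = "$current`nVerified by $env:USERNAME on $(Get-Date)"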

Next, I need to turn this object into the proper text format. This is the part that I can't do with the CIM cmdlets.
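With Get-WmiObject, the object exposes a GetText() method that does the serialization. I'm assuming the CimDtd20 format here; if the method rejects it, WmiDtd20 is the other option:

# Serialize the modified settings object to embedded-instance text
$text = $data.GetText([System.Management.TextFormat]::CimDtd20)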

To commit the change, I need the system management service object.

Then I need to invoke its ModifySystemSettings() method, which requires a little fancy PowerShell work.
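A sketch of both steps follows; the management service is a single instance per host, so no filter is needed:

# Get the Hyper-V management service for this host
$vmms = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService

# Pass the serialized settings text to ModifySystemSettings()
$result = $vmms.ModifySystemSettings($text)
$result.ReturnValue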

Invoking the WMI method with PowerShell

A return value of 0 indicates success.

Verifying the change

The Network Matters

It isn't especially difficult to wrap these steps into a PowerShell function. But here's the challenge: using Get-WmiObject against a remote server relies on legacy networking protocols. This is why Get-CimInstance is preferred and Get-WmiObject should be considered deprecated. So what to do? The answer is to run the WMI commands over a PowerShell remoting session. I can create a PSSession to the remote server, or use something like Invoke-Command. The connection will use WSMan and all the features of PowerShell remoting, and inside that session the WMI calls run locally on the Hyper-V host, so no legacy network connection is required.
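Putting it together, a rough sketch looks like this, with HV01 and SRV1 standing in for your own host and VM names:

$session = New-PSSession -ComputerName HV01

Invoke-Command -Session $session -ScriptBlock {
    # Everything in here runs locally on the Hyper-V host
    $data = Get-WmiObject -Namespace root\virtualization\v2 `
        -Class Msvm_VirtualSystemSettingData `
        -Filter "ElementName='SRV1' AND VirtualSystemType='Microsoft:Hyper-V:System:Realized'"

    $data.Notes = "Last updated $(Get-Date) by $env:USERNAME"

    $vmms = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService
    ($vmms.ModifySystemSettings($data.GetText([System.Management.TextFormat]::CimDtd20))).ReturnValue
}

Remove-PSSession $session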

The end result is that I get the best of both worlds – WMI commands doing what I need over a PowerShell remoting session. By now, this might seem a bit daunting. Don’t worry. I made it easy.

Set-VMNote

In my new PSHyperVTools module, I added a command called Set-VMNote that does everything I’ve talked about. You can install the module from the PowerShell Gallery. If you are interested in the sausage-making, you can view the source code on Github at https://github.com/jdhitsolutions/PSHyperV/blob/master/functions/public.ps1. The function should make it easier to manage notes and supports alternate credentials.

Set-VMNote help

Now I can create new notes.

Creating new notes

Or easily append.

Appending notes

It might be hard to tell the result from the console output alone, so here's what it looks like in Hyper-V Manager.

Verifying the notes

Most of the time the Hyper-V PowerShell cmdlets work just fine and meet my needs. But if they don’t, that’s a great thing about PowerShell – you can just create your own solution! And as you can probably guess, I will continue to create and share my own solutions right here.

Go to Original Article
Author: Jeffery Hicks

AWS Summit widens net with services for containers, devs

NEW YORK — AWS pledges to maintain its torrid pace of product and service innovation and to continue expanding the breadth of its offerings to meet customer needs.

“You decide how to build software, not us,” said Werner Vogels, Amazon vice president and CTO, in a keynote at the AWS Summit NYC event. “So, we need to give you a really big toolbox so you can get the tools you need.”

But AWS, which holds a healthy lead over Microsoft and Google in the cloud market, also wants to serve as an automation engine for customers, Vogels added.

“I strongly believe that in the future … you will only write business logic,” he said. “Focus on building your application, drop it somewhere and we will make it secure and highly available for you.”

Parade of new AWS services continues

Vogels sprinkled a series of news announcements throughout his keynote, two of which centered on containers. First, Amazon CloudWatch Container Insights, a service that provides container-level monitoring, is now in preview for clusters in Amazon Elastic Container Service and AWS Fargate, in addition to Amazon EKS and Kubernetes. Also, AWS for Fluent Bit, which serves as a centralized environment for container logging, is now generally available, he said.

Serverless compute also got some attention with the release of Amazon EventBridge, a serverless event bus to take in and process data across AWS’ own services and SaaS applications. AWS customers currently do this with a lot of custom code, so “the goal for us was to provide a much simpler programming model,” Vogels said. Initial SaaS partners for EventBridge include Zendesk, OneLogin and Symantec.

Focus on building your application, drop it somewhere and we will make it secure and highly available for you.
Werner VogelsCTO, AWS

AWS minds the past, with eye on the future

Most customers are moving away from the concept of a monolithic application, “but there are still lots of monoliths out there,” such as SAP ERP implementations that won’t go away anytime soon, Vogels said.

But IT shops with a cloud-first mindset focus on newer architectural patterns, such as microservices. AWS wants to serve both types of applications with a full range of instance types, containers and serverless functionality, Vogels said.

He cited customers such as McDonald’s, which has built a home-delivery system with Amazon Elastic Container Service. It can take up to 20,000 orders per second and is integrated with partners such as Uber Eats, Vogels said.

Vogels ceded the stage for a time to Steve Randich, executive vice president and CIO of the Financial Industry Regulatory Authority (FINRA), a nonprofit group that seeks to keep brokerage firms fair and honest.

FINRA moved wholesale to AWS and its systems now ingest up to 155 billion market events in a single day — double what it was three years ago. “When we hit these peaks, we don’t even know them operationally because the infrastructure is so elastic,” Randich said.

FINRA has designed the AWS-hosted apps to run across multiple availability zones. “Essentially, our disaster recovery is tested daily in this regard,” he said.

AWS’ ode to developers

Developers have long been a crucial component of AWS' customer base, and the company has built out a string of tool sets aimed at a broad set of languages and integrated development environments (IDEs). These include AWS Cloud9, IntelliJ, Python, Visual Studio and Visual Studio Code.

VS Code is Microsoft's lightweight, open source code editor, which has seen strong initial uptake. The AWS Toolkit for Visual Studio Code is now generally available, Vogels said to audience applause.

Additionally, the AWS Cloud Development Kit (CDK) is now generally available with support for TypeScript and Python. AWS CDK makes it easier for developers to use high-level constructs to define cloud infrastructure in code, said Martin Beeby, AWS principal developer evangelist, in a demo.

AWS seeks to keep the cloud secure

Vogels also used part of his AWS Summit talk to reiterate AWS’ views on security, as he did at the recent AWS re:Inforce conference dedicated to cloud security.

“There is no line in the sand that says, ‘This is good-enough security,'” he said, citing newer techniques such as automated reasoning as key advancements.

Werner Vogels, AWS CTO
Werner Vogels, CTO of AWS, on stage at the AWS Summit in New York.

Classic security precautions have become practically obsolete, he added. “If firewalls were the way to protect our systems, then we’d still have moats [around buildings],” Vogels said. Most attack patterns AWS sees are not brute-force front-door efforts, but rather spear-phishing and other techniques: “There’s always an idiot that clicks that link,” he said.

The full spectrum of IT, from operations to engineering to compliance, must be mindful of security, Vogels said. This is true within DevOps practices such as CI/CD at both an external and an internal level, he said. The former involves matters such as identity and access management and hardened servers, while the latter brings in techniques including artifact validation and static code analysis.

AWS Summit draws veteran customers and newcomers

The event at the Jacob K. Javits Convention Center drew thousands of attendees with a wide range of cloud experience, from FINRA to fledgling startups.

“The analytics are very interesting to me, and how I can translate that into a set of services for the clients I’m starting to work with,” said Donald O’Toole, owner of CeltTools LLC, a two-person startup based in Brooklyn. He retired from IBM in 2018 after 35 years.

AWS customer Timehop offers a mobile application oriented around “digital nostalgia,” which pulls together users’ photographs from various sources such as Facebook and Google Photos, said CTO Dmitry Traytel.

A few years ago, Timehop found itself in a place familiar to many startups: low on venture capital and without a viable monetization strategy. The company created its own advertising server on top of AWS, dubbed Nimbus, rather than rely on third-party products. Once a user session starts, the system conducts an auction across multiple prominent mobile ad networks, which results in the best possible price for its ad inventory.

“Nimbus let us pivot to a different category,” Traytel said.

Go to Original Article
Author: