
Use a Windows Server 2019 domain controller or go to Azure?

Transcript – Use a Windows Server 2019 domain controller or go to Azure?

In this video, I will show you how to set up a domain controller in Windows Server 2019.

I’m logged into the Windows Server 2019 desktop. I’m going to go ahead and open Server Manager.

The process of setting up a domain controller is really similar to what you had in the previous Windows Server version.

Go up to Manage and select Add roles and features. This launches the wizard.

Click Next to bypass the Before you begin screen. I’m taken to the Installation type menu. I’m prompted to choose Role-based or feature-based installation or Remote Desktop Services installation. Choose the role-based or feature-based installation option and click Next.

I’m prompted to select my server from the pool. There’s only one server in here. This is the server that will become my domain controller. One thing I want to point out is to look at the operating system. This is Windows Server 2019 Datacenter edition; in a few minutes, you’ll see why I’m pointing this out. Click Next.

At the Server Roles menu, there are two roles that I want to install: Active Directory Domain Services and the DNS roles. Select the checkbox for Active Directory Domain Services. When I select that checkbox, I’m prompted to add some additional features. I’ll go ahead and select the Add Features button.

I’m also going to select the DNS Server checkbox and, once again, click on Add Features. Click Next.

Click Next on the Features menu. Click Next again on the AD DS menu. Click Next on the DNS menu.

I’m taken to the confirmation screen. It’s a good idea to take a moment and just review everything to make sure that it appears correct. Click Install. After a few minutes, the installation completes.

I should point out that the server was provisioned ahead of time with a static IP address. If you don’t do that, then you’re going to get a warning message during the installation wizard. Click Close.

The next thing that we need to do is to configure this to act as a domain controller. Click on the notifications icon. You can see there is a post-deployment configuration task that's required. In this case, we need to promote the server to a domain controller. Do that by clicking on the link, which opens the Active Directory Domain Services Configuration Wizard.

I’m going to create a new forest, so I’ll click the Add a new forest button. I’m going to call this forest poseylab.com and click Next.

On the domain controller options screen, you’ll notice that the forest functional level is set to Windows Server 2016. There is no Windows Server 2019 option — at least, not yet. That’s the reason that I pointed out earlier that we are indeed running on Windows Server 2019. Leave this set to Windows Server 2016. Leave the default selections on the domain controller capabilities. I need to enter and confirm a password, so I’ll do that and click Next.

Click Next again on the DNS options screen.

The NetBIOS domain name is populated automatically. Click Next.

Go with the default paths for AD DS database, logs and SYSVOL. Click Next.

Everything on the Review options screen appears to be correct, so click Next.

Windows will do a prerequisites check. We have a couple of warnings, but all the prerequisite checks completed successfully, so we can go ahead and promote the server to a domain controller. Click Install to begin the installation process.

After a few minutes, the Active Directory Domain Services and the DNS roles are configured. Both are listed in Server Manager.
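For admins who prefer scripting, the same role installation and forest promotion can be done from PowerShell. This is only a minimal sketch of the equivalent commands, assuming the same poseylab.com forest used in the video; Install-ADDSForest installs DNS with the -InstallDns switch and prompts for the Directory Services Restore Mode password if you don't supply one:

# Install the AD DS and DNS roles plus the management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote the server to a domain controller for a new forest
Install-ADDSForest -DomainName "poseylab.com" -InstallDns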

Let’s go ahead and switch over to a Windows 10 machine and make sure that we can connect that machine to the domain. Click on the Start button and go to Settings, then go to Accounts. I’ll click on Access work or school then Connect. I’ll choose the option Join this device to a local Active Directory domain. I’m prompted for the domain name, which is poseylab.com. Click Next.

I’m prompted for the administrative name and password. I’m prompted to choose my account type and account name. Click Next and Restart now.

Once the machine restarts, I’m prompted to log into the domain. That’s how you set up an Active Directory domain controller in Windows Server 2019.
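The domain join the video performs through the Settings app can also be scripted. As a minimal sketch, assuming the same poseylab.com domain, run the following from an elevated PowerShell session on the Windows 10 machine:

# Prompts for a domain account that is allowed to join computers, then reboots
Add-Computer -DomainName "poseylab.com" -Credential (Get-Credential) -Restart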


Go to Original Article

Lighten up and install Exchange 2019 on Windows Server Core

One of the biggest changes in Exchange Server 2019 from previous versions of the messaging platform is Microsoft supports — and recommends — deployments on Server Core.

For those who are comfortable with this deployment model, the option to install Exchange 2019 on a server without a GUI is a great advance. You can still manage the system with the Exchange Admin Center from another computer, so you really don’t lose anything when you install Exchange this way. The upside to installing Exchange on a Server Core machine is a smaller attack surface with less resource overhead. For some IT shops, though, the lack of a GUI in Server Core can present a challenge when troubleshooting issues.

This tutorial will explain how to install Exchange 2019 on Server Core in a lab environment instead of a production setting. The following instructions will work the same for either setting, but users new to Server Core should practice a few deployments in a lab before trying the deployment for real.

Getting started

For the sake of brevity, this tutorial does not cover the aspects related to the installation of the Server Core operating system — it is identical to other Windows Server build processes — and the standard Exchange Server sizing exercises and overall deployment planning.

After installing a new Server Core 2019 build, you see the logon screen in Figure 1.

Server Core logon screen
Figure 1. Instead of the usual Desktop Experience in the full Windows Server installation, the Server Core deployment shows a simple black logon screen.

Most of the setup work on the server will come from PowerShell. After logging in, load PowerShell with the following command:

Start PowerShell

Next, this server needs an IP address. To check the current configuration, use the following command:

Get-NetIPAddress

This generates the server’s IP address configuration for all its network interfaces.

IP address configuration
Figure 2. Use the Get-NetIPAddress cmdlet to see information about the network interfaces on the server.

Your deployment will have different information, so select an interface and use the New-NetIPAddress cmdlet to configure it. Your command should look something like the following:

New-NetIPAddress -InterfaceIndex {Number} -IPAddress {IP Address} -PrefixLength {Subnet mask length} -DefaultGateway {IP Address}
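For example, if Get-NetIPAddress showed the interface at index 4 and you wanted to use a 192.168.0.0/24 network, the command might look like the following; the addresses here are placeholders for illustration only. The article doesn't show it, but the interface also needs to point at the domain's DNS server before the upcoming domain join, which Set-DnsClientServerAddress handles:

New-NetIPAddress -InterfaceIndex 4 -IPAddress 192.168.0.10 -PrefixLength 24 -DefaultGateway 192.168.0.1

Set-DnsClientServerAddress -InterfaceIndex 4 -ServerAddresses 192.168.0.5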

After setting up the network, change the computer name and join it to the domain:

Add-Computer -DomainName {Domain} -NewName {Server} -DomainCredential {Admin account}

Next, install the prerequisites for Exchange 2019. The following cmdlet adds the Windows features we need:

Install-WindowsFeature Server-Media-Foundation, RSAT-ADDS

You can use the Exchange install wizard to add the other required Windows components or you can use the following PowerShell command to handle it:

Install-WindowsFeature Server-Media-Foundation, NET-Framework-45-Features, RPC-over-HTTP-proxy, RSAT-Clustering, RSAT-Clustering-CmdInterface, RSAT-Clustering-PowerShell, WAS-Process-Model, Web-Asp-Net45, Web-Basic-Auth, Web-Client-Auth, Web-Digest-Auth, Web-Dir-Browsing, Web-Dyn-Compression, Web-Http-Errors, Web-Http-Logging, Web-Http-Redirect, Web-Http-Tracing, Web-ISAPI-Ext, Web-ISAPI-Filter, Web-Metabase, Web-Mgmt-Service, Web-Net-Ext45, Web-Request-Monitor, Web-Server, Web-Stat-Compression, Web-Static-Content, Web-Windows-Auth, Web-WMI, RSAT-ADDS

Prepare to install Exchange 2019

The next step is to download Exchange Server 2019 and the required prerequisites to get the platform running. Be sure to check Microsoft’s prerequisites for Exchange 2019 mailbox servers on Windows Server 2019 Core from this link because they have a tendency to change over time. The Server Core 2019 deployment needs the following software installed from the Microsoft link:

  • .NET Framework 4.8 or later
  • Visual C++ Redistributable Package for Visual Studio 2012
  • Visual C++ Redistributable Package for Visual Studio 2013

Next, run the following PowerShell command to install the Media Foundation:

Install-WindowsFeature Server-Media-Foundation

Lastly, install the Unified Communications Managed API 4.0 from the following link.

To complete the installation process, reboot the server with the following command:

Restart-Computer -Force

Installing Exchange Server 2019

To proceed to the Exchange 2019 installation, download the ISO and mount the image:

Mount-DiskImage c:\<FolderPath>\ExchangeServer2019-x64.iso

Navigate to the mounted drive and start the Exchange setup with the following command:

.\Setup.exe /m:install /roles:m /IAcceptExchangeServerLicenseTerms
Exchange Server 2019 installation
Figure 3. Mount the Exchange Server 2019 ISO as a drive, then start the unattended setup to start the installation.

The installation should complete with Exchange Server 2019 operating on Windows Server Core.

Managing Exchange Server 2019 on Server Core

Once you complete the installation and reboot the server, you’ll find the same logon screen as displayed in Figure 1.

This can be somewhat disconcerting for an administrator who has spent their whole career working with the standard Windows GUI. There isn’t much you can do to manage your Exchange Server from the bare command prompt.

Your first management option is to use PowerShell locally on this server. From the command prompt, enter:

Start PowerShell

From the PowerShell window, enter the command:

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn

You need to run this command each time you start PowerShell on the headless Exchange Server and want to use the Exchange Management Shell. To streamline this process, you can add that cmdlet to your PowerShell profile so that the Exchange Management snap-in loads automatically when you start PowerShell on that server. To find the location of your PowerShell profile, just type $Profile in PowerShell. That file may not exist if you’ve never created it; if so, open Notepad.exe, create a file at the path $Profile points to and enter that previous Add-PSSnapin command.
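If you would rather skip Notepad, here is a minimal sketch of the same profile setup from the PowerShell session itself:

if (-not (Test-Path $Profile)) { New-Item -ItemType File -Path $Profile -Force | Out-Null }

Add-Content -Path $Profile -Value 'Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn'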

The more reasonable management option for your headless Exchange Server is to never log into the server locally. You can run the Exchange Admin Center from a workstation to remotely manage the Exchange 2019 deployment.

Go to Original Article

For Sale – /trade HP ML350 G6 server

Hi all, I’m looking to offload this server. It was used mainly as an Arma 3, DayZ and TeamSpeak server, but over the last year it’s just been sitting in a cupboard unused since I can’t sit at a computer any more due to major migraines… It’s a HP ML350 ProLiant G6 Server
2 x E5620 Quad Core 2.4Ghz
46Gb DDR3
8 x 146Gb

It’s a complete workhorse and will happily enjoy life as an always-on 24/7 machine
It is the tower case model (easier on the ears) BUT can also be rack mounted

No idea what it’s worth, so I’ll say £200. Being a server it weighs an absolute ton, and as I have no packing, collection is preferred unless you can find a decent courier that transports safely without it being packed.

Thanks

Go to Original Article

Update makes Storage Migration Service more cloud-friendly

In days of yore, when Microsoft released a new version of Windows Server, the features in its administrative tools remained fixed until the next major version, which could be three or four years. Today’s Microsoft no longer follows this glacial release cadence.

The PowerShell team drops previews every few weeks and plans to deliver a major version annually. The Windows Admin Center developers put out their 16th update in November 2019, counting from the tool’s general availability release in April 2018. Among the new features and refinements is more cloud-friendly functionality in one of its tools, the Storage Migration Service.

The Storage Migration Service is a feature in Windows Server 2019 designed to reduce the traditional headaches associated with moving unstructured data — such as Microsoft Word documents, Excel files and videos — to a newer file server either on premises or in the cloud. Some files come with a lot of baggage in the form of Active Directory memberships or specific share properties that can hamstring a manual migration.

Firing up robocopy and hoping everything copies to the new file server without issue has not typically gone well for administrators, who then field complaints from users about missing file or share permissions. And that’s just the typical experience when moving from one on-premises file server to another. The technical leap to get all that data and its associated properties into a file server in the cloud normally requires a team of experts to ensure a seamless transition.

That’s where version 1910 of the Windows Admin Center steps in. Microsoft developers tweaked the underlying functionality to account for potential configuration mishaps that would botch a file server migration to the cloud, such as insufficient space for the destination server. Windows Admin Center now comes with an option to create an Azure VM that handles the minutiae, such as installation of roles and domain join setup.

This video tutorial by contributor Brien Posey explains how to use the Storage Migration Service to migrate a Windows Server 2008 file server to a newer supported Windows Server version. The transcript of these instructions is below.

With Windows Server 2008 and 2008 R2 having recently reached the end of life, it’s important to transition away from those servers if you haven’t already done so.

In this video, I want to show you how to use the Storage Migration Service to transfer files from a Windows Server 2008 file server over to something newer, such as Windows Server 2019.

With the Windows Admin Center open, go to the Storage Migration Service tab. I’ve used Server Manager to install the Storage Migration Service and the Storage Migration Service proxy. I went into the Add Roles and Features and then added those. I’ve enabled the necessary firewall rules. Specifically, you need to allow SMB, netlogon service and WMI (Windows Management Instrumentation).
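For reference, roughly the same preparation can be scripted instead of using Server Manager and the firewall GUI. The feature names (SMS and SMS-Proxy) and the firewall display-group names below are assumptions to verify with Get-WindowsFeature and Get-NetFirewallRule on your own build:

Install-WindowsFeature SMS, SMS-Proxy -IncludeManagementTools

Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"
Enable-NetFirewallRule -DisplayGroup "Netlogon Service"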

There are three steps involved in a storage migration. Step one is to create a job and inventory your servers. Step two is to transfer data from your old servers. Step three is to cut over to the new servers.

Let’s start with step one. The first thing we need to do is to create an inventory of our old server. Click on New job, and then I choose my source device. I have a choice between either Windows servers and clusters or Linux servers. Since I’m going to transfer data off of Windows Server 2008, I select Windows servers and clusters.

I have to give the job a name. I will call this 2008 and click OK.

Next, I provide a set of credentials and then I have to add a device to inventory. Click Add a device and then we could either enter the device name or find it with an Active Directory search. I’m going to search for Windows servers, which returns five results. The server legacy.poseylab.com is my Windows Server 2008 machine. I’ll select that and click Add.

The next thing is to select this machine and start scanning it to begin the inventory process. The scan succeeded and found a share on this machine.

Click Next and enter credentials for the destination server. We’re prompted to specify the destination server. I’m going to select Use an existing server or VM, click the Browse button and search for a Windows server. I’ll use a wildcard character as the server name to search Active Directory.

I’ve got a machine called FileServer.poseylab.com that’s the server that I’m going to use as my new file server. I’ll select that and click Add and then Scan, so now we see a list of everything that’s going to be transferred.

The C: volume on our old server is going to be mapped to the C: volume on our new server. We can also see which shares are going to be transferred. We’ve only got one share called Files in the C:\Files path. It’s an SMB share with 55.5 MB of data in it. We will click the Include checkbox to select this particular share to be transferred.

Click Next and we can adjust some transfer settings. The first option is to choose a validation method for transmitted files. By default, no validation is used, but being that I’m transferring such a small amount of data, I will enable CRC64 validation. Next, we can set the maximum duration of the file transfer in minutes.

Next, we can choose what happens with users and groups; we have the option of renaming accounts with the same name, reusing accounts with the same name or not transferring users and groups. We can specify the maximum number of retries and the delay between retries in seconds. I’m going to go with the default values on those and click Next.

We validate the source and the destination device by clicking the Validate button to run a series of tests to make sure that you’re ready to do the transfer. The validation tests passed, so we’re free to start the transfer. Click Next.

This screen is where we start the transfer. Click on Start transfer to transfer all the data. After the transfer completes, we need to verify our credentials. We have a place to add our credentials for the source device and for the destination device. We will use the stored credentials that we used earlier and click Next.

We have to specify the network adapter on both the source and the destination servers. I’m going to choose the destination network adapter and use DHCP (Dynamic Host Configuration Protocol). I’m going to assign a randomly generated name to the old server after the cutover, so the new server will assume the identity of the old server. Click Next.

We’re prompted once again for the Active Directory credentials. I’m going to use the stored credentials and click Next.

We’re taken to the validation screen. The source device original name is legacy.poseylab.com and it’s going to be renamed to a random name. The destination server’s original name was fileserver.poseylab.com and is going to be renamed to legacy.poseylab.com, so the destination server is going to assume the identity of the source server once all of this is done. To validate this, click on the server and then click Validate. The check passed, so I’ll go ahead and click Next.

The last step in the process would be to perform the cutover. Click on Start cut over to have the new server assume the identity of the old server.

That’s how a migration from Windows Server 2008 to Windows Server 2019 works using the Storage Migration Service.

Go to Original Article

Essential components and tools of server monitoring

Though server capacity management is an essential part of data center operations, it can be a challenge to figure out which components to monitor and what tools are available. How you address server monitoring can change depending on what type of infrastructure you run within your data center, as virtualized architecture requirements differ from on-premises processing needs.

With the capacity management tools available today, you can monitor and optimize servers in real time. Monitoring tools keep you updated on resource usage and automatically allocate resources between appliances to ensure continuous system uptime.

For a holistic view of your infrastructure, capacity management software should monitor these server components to some degree. Tracking these components can help you troubleshoot issues and predict any potential changes in processing requirements.

CPU. Because CPUs handle basic logic and I/O operations, as well as route commands for other components in the server, they’re always in use. High CPU usage can indicate an issue with the CPU, but more likely it’s a sign that the issue is with a connected component. Above 70% utilization, applications on the server can become sluggish or stop responding.

Memory. High memory usage can result from multiple concurrent applications, but additional issues can also stem from a faulty process that is normally less resource-intensive. The memory hardware component itself rarely fails, but you should investigate performance when its usage rates rise.

Storage area network. SAN component issues can occur at several points, including connection cabling, host bus adapters, switches and the storage servers themselves. A single SAN server can host data for multiple applications and often span multiple physical sites, which leads to significant business effects if any component fails.

Server disk capacity. With the right amount of capacity, storage disks help alleviate storage issues and reduce data storage bottlenecks. Problems can arise when more users access the same application that uses a particular storage location, or if a resource-intensive process is located on a server not designed for the application. If you can’t increase disk capacity, you can monitor it and investigate when usage rates rise, so you can optimize future usage.

Storage I/O rates. You should also monitor storage I/O rates. Bottlenecks and high I/O rates can indicate a variety of issues, including CPU problems, disk capacity limitations, process bugs and hardware failure.

Physical temperatures of servers. Another vital component to monitor is server temperatures. Data centers are cooled to prevent any hardware component problems, but temperatures can increase for a variety of reasons: HVAC failure, internal server hardware failure (CPU, RAM or motherboard), external hardware failure (switches and cabling) or a software failure (firmware bug or application process issues).

OS, firmware and server applications. The entire server software stack (basic input/output system, OS, hypervisors, drivers and applications) must work together to ensure optimal usage. Failed or missed updates can lead to issues for the server and any hosted applications, a poor user experience or downtime.
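Dedicated capacity management tools cover these components in far more depth, but as a quick illustration of the raw data they build on, Windows’ built-in performance counters can spot-check CPU, memory and disk pressure from PowerShell. The counter paths shown are the standard English names:

Get-Counter -Counter '\Processor(_Total)\% Processor Time', '\Memory\Available MBytes', '\PhysicalDisk(_Total)\Avg. Disk Queue Length' -SampleInterval 5 -MaxSamples 3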

Streamline reporting with software tools

Most server monitoring tools track and notify you of any issues with servers in your technology stack. They include default and custom component monitoring, automated and manual optimization features, and standard and custom alerting options.

The software sector for server monitoring covers all types of architectures as well as required depth and breadth of data collection. Here is a shortlist of server capacity monitoring software for your data center.

SolarWinds Server & Application Monitor
SolarWinds’ software provides monitoring, optimization and diagnostic tools in a central hub. You can quickly identify which server resources are at capacity in real time, use historical reporting to track trends and forecast resource purchasing. Additional functions let you diagnose and fix virtual and physical storage capacity bottlenecks that affect application health and performance.

HelpSystems Vityl Capacity Management
Vityl Capacity Management is a comprehensive capacity management offering that makes it easy for organizations to proactively manage performance and do capacity planning in hybrid IT setups. It provides real-time monitoring data and historical trend reporting, which helps you understand the health and performance of your network over time.

BMC Software TrueSight Capacity Optimization
The TrueSight Capacity Optimization product helps admins plan, manage and optimize on-premises and cloud server resources through real-time and predictive features. It provides insights into multiple network types (physical, virtual or cloud) and helps you manage and forecast server usage.

VMware Capacity Planner
As a planning tool, VMware’s Capacity Planner can gather and analyze data about your servers and better forecast future usage. The forecasting and prediction functionality provides insights on capacity usage trends, as well as virtualization benchmarks based on industry performance standards.

Splunk App for Infrastructure
The Splunk App for Infrastructure (SAI) is an all-in-one tool that uses streamlined workflows and advanced alerting to monitor all network components. With SAI, you can create custom visualizations and alerts for better real-time monitoring and reporting through metric grouping and filtering based on your data center and reporting needs.

Go to Original Article

The Acid Test for Your Backup Strategy

For the first several years that I supported server environments, I spent most of my time working with backup systems. I noticed that almost everyone did their due diligence in performing backups. Most people took adequate responsibility for verifying that their scheduled backups ran without error. However, almost no one ever checked that they could actually restore from a backup — until disaster struck. I gathered a lot of sorrowful stories during those years. I want to use those experiences to help you avert a similar tragedy.

Successful Backups Do Not Guarantee Successful Restores

Fortunately, a lot of the problems that I dealt with in those days have almost disappeared due to technological advancements. But, that only means that you have better odds of a successful restore, not that you have a zero chance of failure. Restore failures typically mean that something unexpected happened to your backup media. Things that I’ve encountered:

  • Staff inadvertently overwrote a full backup copy with an incremental or differential backup
  • No one retained the necessary decryption information
  • Media was lost or damaged
  • Media degraded to uselessness
  • Staff did not know how to perform a restore — sometimes with disastrous outcomes

I’m sure that some of you have your own horror stories.

These risks apply to all organizations. Sometimes we manage to convince ourselves that we have immunity to some or all of them, but you can’t get there without extra effort. Let’s break down some of these line items.

People Represent the Weakest Link

We would all like to believe that our staff will never make errors and that the people who need to operate the backup system have the ability to do so. However, as part of your disaster recovery planning, you must assume that you cannot predict the state or availability of any individual. If only a few people know how to use your backup application, then those people become part of your risk profile.

You have a few simple ways to address these concerns:

  • Periodically test the restore process
  • Document the restore process and keep the documentation updated
  • Give non-IT personnel knowledge of and practice with backup and restore operations
  • Make sure non-IT personnel know how to get help with the application

It’s reasonable to expect that you would call your backup vendor for help in the event of an emergency that prevented your best people from performing restores. However, in many organizations without a proper disaster recovery plan, no one outside of IT even knows who to call. The knowledge inside any company naturally tends to arrange itself in silos, but you must make sure to spread at least the bare minimum information.

Technology Does Fail

I remember many shock and horror reactions when a company owner learned that we could not read the data from their backup tapes. A few times, these turned into grief and loss counselling sessions as they realized that they were facing a critical — or even complete — data loss situation. Tape has its own particular risk profile, and lots of businesses have stopped using it in favour of on-premises disk-based storage or cloud-based solutions. However, all backup storage technologies present some kind of risk.

In my experience, data degradation occurred most frequently. You might see this called other things, my favourite being “bit rot”. Whatever you call it, it all means the same thing: the data currently on the media is not the same data that you recorded. That can happen just because magnetic storage devices have susceptibilities. That means that no one made any mistakes — the media just didn’t last. For all media types, we can establish an average for failure rates. But, we have absolutely no guarantees on the shelf life for any individual unit. I have seen data pull cleanly off decade-old media; I have seen week-old backups fail miserably.

Unexpectedly, newer technology can make things worse. In our race to cut costs, we frequently employ newer ways to save space and time. In the past, we had only compression and incremental/differential solutions. Now, we have tools that can deduplicate across several backup sets and at multiple levels. We often put a lot of reliance on the single copy of a bit.

How to Test your Backup Strategy

The best way to identify problems is to break-test your backups to find weaknesses. Test restores will help you identify backup reliability problems and solve them. Simply put, you cannot know that you have a good backup unless you can perform a good restore. You cannot know that your staff can perform a restore unless they perform a restore. For maximum effect, you need to plan tests to occur on a regular basis.

Some tools, like Altaro VM Backup, have built-in tools to make tests easy. Altaro VM Backup provides a “Test & Verify Backups” wizard to help you perform on-demand tests and a “Schedule Test Drills” feature to help you automate the process.


If your tool does not have such a feature, you can still use it to make certain that your data will be there when you need it. It should have some way to restore a separate or redirected copy. So, instead of overwriting your live data, you can create a duplicate in another place where you can safely examine and verify it.

Test Restore Scenario

In the past, we would often simply restore some data files to a shared location and use a simple comparison tool. Now that we use virtual machines for so much, we can do a great deal more. I’ll show one example of a test that I use. In my system, all of these are Hyper-V VMs. You’ll have to adjust accordingly for other technologies.

Using your tool, restore copies of:

  • A domain controller
  • A SQL server
  • A front-end server dependent on the SQL server

On the host that you restored those VMs to, create a private virtual switch. Connect each virtual machine to it. Spin up the copied domain controller, then the copied SQL server, then the copied front-end. Use the VM connect console to verify that all of them work as expected.
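On a Hyper-V host, that isolated wiring can be scripted with a few cmdlets. This is only a sketch; the VM names are hypothetical restored copies, so substitute whatever names your backup tool gives the restored machines:

# Create an isolated switch so the restored copies cannot touch production
New-VMSwitch -Name "RestoreTest" -SwitchType Private

# Attach each restored VM to the isolated switch
'DC01-Restore', 'SQL01-Restore', 'APP01-Restore' | ForEach-Object { Connect-VMNetworkAdapter -VMName $_ -SwitchName "RestoreTest" }

# Bring the copies up in dependency order: domain controller, SQL, then front-end
Start-VM -Name 'DC01-Restore'
Start-VM -Name 'SQL01-Restore'
Start-VM -Name 'APP01-Restore'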

Create test restore scenarios of your own! Make sure that they match a real-world scenario that your organization would rely on after a disaster.


Go to Original Article
Author: Eric Siron

How to Run a Windows Failover Cluster Validation Test

Guest clustering describes an increasingly popular deployment configuration for Windows Server Failover Clusters where the entire infrastructure is virtualized. With a traditional cluster, the hosts are physical servers and run virtual machines (VMs) as their highly available workloads. With a guest cluster, the hosts are also VMs which form a virtual cluster, and they run additional virtual machines nested within them as their highly available workloads. Microsoft now recommends dedicating clusters for each class of enterprise workload, such as your Exchange Server, SQL Server, File Server, etc., because each application has different cluster settings and configuration requirements.

Setting up additional clusters became expensive for organizations when they had to purchase and maintain more physical hardware. Other businesses wanted guest clustering as a cheaper test, demo or training infrastructure. To address this challenge, Microsoft Hyper-V supports “nested virtualization” which allows you to create virtualized hosts and run VMs from them, creating fully-virtualized clusters. While this solves the hardware problem, it has created new obstacles for backup providers as each type of guest cluster has special considerations.

Hyper-V Guest Cluster Configuration and Storage

Let’s first review the basic configuration and storage requirements for a guest cluster. Fundamentally, a guest cluster has the same requirements as a physical cluster, including two or more hosts (nodes), a highly available workload or VM, redundant networks, and shared storage. The entire solution must also pass the built-in cluster validation tests. You should also force every virtualized cluster node to run on a different physical host so that if a single server fails, it will not bring down your entire guest cluster. This can be easily configured using Failover Clustering’s AntiAffinityClassNames or Azure Availability Sets, as sketched below. Some of the guest cluster requirements will also vary with the nested virtualized application which you are running, so always check for workload-specific requirements during your planning.
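Here is a hedged sketch of the AntiAffinityClassNames approach, run against the physical Hyper-V cluster that hosts the node VMs; the clustered role names GC-Node1 and GC-Node2 are placeholders:

# Give both guest-cluster node VMs the same anti-affinity class name so the
# physical cluster keeps them on different hosts whenever possible
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("GuestClusterNodes") | Out-Null
(Get-ClusterGroup -Name "GC-Node1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "GC-Node2").AntiAffinityClassNames = $class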

Shared storage used to be a requirement for all clusters because it allows the workload or VM to access the same data regardless of which node is running that workload. When the workload fails over to a different node, its services get restarted, then it accesses the same shared data which it was previously using. Windows Server 2012 R2 and later supports guest clusters with shared storage using a shared VHDX disk, iSCSI or virtual Fibre Channel. Microsoft added support for local DAS replication using Storage Spaces Direct (S2D) within Windows Server 2016 and continued to improve S2D with the latest 2019 release.

For a guest cluster deployment guide, you can refer to the documentation provided by Microsoft to create a guest cluster using Hyper-V. If you want to do this in Microsoft Azure, then you can also follow enabling nested virtualization within Microsoft Azure.

Backup and Restore the Entire Hyper-V Guest Cluster

The easiest backup solution for guest clustering is to save the entire environment by protecting all the VMs in that set. This has almost-universal support by third-party backup vendors such as Altaro, as it is essentially just protecting traditional virtual machines which have a relationship to each other. If you are using another VM as part of the set, such as an isolated domain controller, iSCSI target or file share witness, make sure it is backed up too.

A (guest) cluster-wide backup is also the easiest solution for scenarios where you wish to clone or redeploy an entire cluster for test, demo or training purposes by restoring it from a backup. If you are restoring a domain controller, make sure you bring this back online first. Note that if you are deploying copies of a VM, especially one that contains a domain controller, make sure any images have been Sysprepped so that they receive new global identifiers and avoid conflicts. Also, use DHCP to get new IP addresses for all network interfaces. In this scenario, it is usually much easier to just deploy this cloned infrastructure in a fully isolated environment so that the cloned domain controllers do not cause conflicts.

The downside to cluster-wide backup and restore is that you will lack the granularity to protect and recover a single workload (or item) running within the VM, which is why most admins will select another backup solution for guest clusters. Before you pick one of the alternative options, make sure that both your storage and backup vendor support this guest clustering configuration.

Backup and Restore a Guest Cluster using iSCSI or Virtual Fibre Channel

When guest clusters first became supported for Hyper-V, the most popular storage configurations were to use an iSCSI target or virtual Fibre Channel. iSCSI was popular because it was entirely Ethernet-based, which means that inexpensive commodity hardware could be used, and Microsoft offered a free iSCSI Target Server. Virtual Fibre Channel was also prevalent since it was the first type of SAN-based storage supported by Hyper-V guest clusters through its virtualized HBAs. Either solution works fine and most backup vendors support Hyper-V VMs running on these shared storage arrays. This is a perfectly acceptable solution for reliable backups and recovery if you are deploying a stable guest cluster. The main challenge was that in its earlier versions, Cluster Shared Volumes (CSV) disks and live migration had limited support by vendors. This meant that basic backups would work, but there were a lot of scenarios that would cause backups to fail, such as when a VM was live migrating between hosts. Most scenarios are now supported in production, but still make sure that your storage and backup vendors support and recommend this configuration.

Backup and Restore a Guest Cluster using a Shared Virtual Hard Disk (VHDX) & VHD Set

Windows Server 2012 R2 introduced a new type of shared storage disk which was optimized for guest clustering scenarios, known as the shared virtual hard disk (.vhdx file), or Shared VHDX. This allowed multiple VMs to synchronously access a single data file which represented a shared disk (similar to a drive shared by an iSCSI Target). This disk could be used as a file share witness disk, or more commonly to store shared application data used by the workload running on the guest cluster. This Shared VHDX file could either be stored on a CSV disk or SMB file share (using a Scale-Out File Server).

This first release of a shared virtual hard disk had some limitations and was generally not recommended for production. The main criticisms were that backups were not reliable, and backup vendors were still catching up to support this new format. Windows Server 2016 addressed these issues by adding support for online resizing, Hyper-V Replica, and application-consistent checkpoints. These enhancements were released as a newer Hyper-V VHD Set (.vhds) file format. The VHD Set included additional file metadata which allowed each node to have a consistent view of that shared drive’s metadata, such as the block size and structure. Prior to this, nodes might have an inconsistent view of the Shared VHDX file structure which could cause backups to fail.
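As a rough sketch of how a VHD Set is typically created and attached, assuming a CSV path and VM names that are only placeholders for your environment:

# Create the VHD Set file (.vhds) on a Cluster Shared Volume
New-VHD -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhds' -SizeBytes 100GB -Dynamic

# Attach the same VHD Set to both guest-cluster node VMs with persistent
# reservations enabled so the nodes can share the disk
Add-VMHardDiskDrive -VMName 'GC-Node1' -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhds' -SupportPersistentReservations
Add-VMHardDiskDrive -VMName 'GC-Node2' -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhds' -SupportPersistentReservations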

While the VHD Set format was optimized to support guest clusters, there were inevitably some issues discovered, which are documented by Microsoft Support. An important thing when using Shared VHDX / VHD Sets for your guest cluster is to make sure that all of your storage, virtualization and clustering components are patched with any related hotfixes specific to your environment, including any from your storage and backup provider. Also, make sure you explicitly check that your ISVs support this updated file format and follow Microsoft’s best practices. Today this is the recommended deployment configuration for most new guest clusters.

Backup and Restore a Guest Cluster using Storage Spaces Direct (S2D)

Microsoft introduced another storage management technology in Windows Server 2016, which was improved in Windows Server 2019, known as Storage Spaces Direct (S2D). S2D was designed as a low-cost solution to support clusters without any requirement for shared storage. Instead, local DAS drives are synchronously replicated between cluster nodes to maintain a consistent state. This is certainly the easiest guest clustering solution to configure; however, Microsoft has announced some limitations in the current release (this link also includes a helpful video showing how to deploy an S2D cluster in Azure).

First, you are restricted to a 2-node or 3-node cluster only, and in either case you can only sustain the loss or outage of a single node. You also want to ensure that the disks have low latency and high performance, ideally using SSD drives or Azure’s Premium Storage managed disks. One of the major limitations remains backups, as host-level virtual disk backups are currently not supported. If you deploy the S2D cluster, you are restricted to only taking backups from within the guest OS. Until this has been resolved and your backup vendor supports S2D, the safest option with the most flexibility will be to deploy a guest cluster using Shared VHDX / VHD Sets.

Summary

Microsoft is striving to improve guest clustering with each subsequent release. Unfortunately, this makes it challenging for third-party vendors to keep up with their support of the latest technology. It can be especially frustrating to admins when their preferred backup vendor has not yet added support for the latest version of Windows, and you should share feedback on what you need with your ISVs. It is always a best practice to select a vendor with close ties to Microsoft, as they are given early access to code and always aim to support the latest and greatest technology. The leading backup companies like Altaro are staffed by Microsoft MVPs and regularly consult with former Microsoft engineers such as myself to support the newest technologies as quickly as possible. But always make sure that you do your homework before you deploy any of these guest clusters so you can pick the best configuration which is supported by your backup and storage provider.


Go to Original Article
Author: Symon Perriman

For Sale – Asus PG279Q & 7x 1TB SSDs (Crucial and Samsung)

6x Crucial MX500 1TB & 1x Samsung 860 EVO 1TB SSDs

These drives were bought to be used in my home dev server & as cache for my NAS.

One of the Crucials is brand new in box; it was left as a cold spare.
All others have seen very light use… sub-2TB written to each disk, so not far off brand new.

Crucial – £60 posted each
Samsung – £70 posted each

Go to Original Article

HCI market grows as storage, servers shrink

Storage and server revenue keeps declining sharply. Much of that money is going to public clouds, but also to hyper-converged infrastructure systems that combine storage and servers.

Dell EMC, NetApp and Hewlett Packard Enterprise (HPE) all reported declines in storage revenue for their most recent quarters. IBM storage inched up 3% after quarters of decline. Pure Storage grew revenue 17% last quarter, but that’s a far cry from its growth in the 30% range as recently as mid-2018.

Naveen Chhabra, Forrester Research senior analyst for servers and operations, said the cloud and hyper-converged infrastructure (HCI) are taking a toll on traditional storage.

“The entire storage market is in trouble,” Chhabra said. “Every storage vendor has declining revenue. The only one that has shown growth is Pure. Storage investment is happening in the cloud, and the rest of storage is under tremendous pressure.”

Chhabra said he expects the HCI market will remain strong and continue to eat into storage and server sales. “There’s no stopping that,” he said. “Everything, including storage, eventually ends up deployed on the server.”

Dell and HPE sell servers and storage, and best show the trend from those technologies to hyper-converged.


HCI market remains extensive

Dell EMC’s storage revenue fell 3% to $4.5 billion last quarter, and servers and networking declined 19% to $4.3 billion. However, COO Jeff Clarke said hyper-converged revenue grew by more than 10%, mainly thanks to its VMware vSAN-powered VxRail product.

HPE storage declined 0.5% to $1.25 billion and compute fell 10% to $3 billion, but its SimpliVity HCI revenue ticked up by 6%.

At the same time, the leading HCI software vendors increased revenue.

Nutanix revenue grew 21% since last year to $347 million, and its billings increased 4% to $428 million. Dell-owned VMware’s vSAN bookings increased “in the mid-teens” according to the vendor. Both Nutanix and VMware claim they would have grown HCI revenue more, but they have switched to subscription licensing that decreases upfront revenue.

HPE actually picked up more HCI hardware customers through Nutanix, which now sells its software stack on HPE ProLiant servers as part of an OEM deal signed in 2019.

Nutanix said its DX Series, consisting of Nutanix software on HPE servers, accounted for 117 new customers in its first full quarter of the partnership. Nutanix CEO Dheeraj Pandey said those deals included a $4 million subscription deal with a financial services company and a $1 million deal with another financial services firm.

“HPE is becoming a pretty substantial portion” of Nutanix business, Pandey said. “It’s looking like a win-win for both sides.”

HPE is also offering Nutanix software-as-a-service through its GreenLake program, but it has not disclosed any numbers of those deals.

Pandey said while Nutanix sells HPE servers with its software, many deals come through recommendations from HPE. The Nutanix software stack includes something HPE’s SimpliVity HCI software lacks: a built-in hypervisor. Nutanix’s AHV hypervisor gives customers an alternative to VMware virtualization.

“We have big customers out there who like HPE, and they’d like to consume Nutanix software on HPE servers,” Pandey said. “We’re one of the few companies that deliver the full stack, including HCI, databases, end-user computing and automation. Our largest customers are AHV customers; they’re full-stack customers on Nutanix. We can run on top of Dell servers, HPE servers, our own white box servers, and we can take our software to the public cloud.”

Dell, VMware HCI market leaders

According to the most recent IDC hyper-converged market tracker for the third quarter of 2019, Dell led in systems revenue with a 35.1% share, followed by Nutanix at 13%, Cisco with 5.4%, HPE at 4.6% and Lenovo at 4.5%. IDC recognizes HCI software separately, with VMware at No. 1 with 38% share followed by Nutanix at 27.2%.

Dell still sells Nutanix software on PowerEdge servers as part of a deal that predates Dell’s acquisition of EMC (which included VMware) but focuses more on pushing VxRail systems with vSAN.

Chhabra said Dell recognized the HCI trend well before HPE and rode that to the HCI market lead. He said he sees Nutanix and HPE growing closer to help battle the Dell-VMware HCI combination.

“How does HPE compete with Dell plus VMware?” he said. “Here comes a strong partner in Nutanix, which can give HPE a like-to-like competitor to Dell. Do you have a hypervisor, do you have infrastructure, do you have storage? That’s what the Dell-VMware combination is, and now HPE has that.”

Dell CFO Thomas Sweet said he expects HCI to continue as Dell EMC’s fastest growing storage segment through this year.

“We’ve had great success with our VxRail product,” Sweet said on Dell’s earnings call last week. “We’ve seen softness in the core [storage] array business. That infrastructure space has been soft.”

Go to Original Article

How to fortify your virtualized Active Directory design

Active Directory is much more than a simple server role. It has become the single sign-on source for most, if not all, of your data center applications and services. This access control covers workstation logins and extends to clouds and cloud services.

Since AD is such a key part of many organizations, it is critical that it is always available and has the resiliency and durability to match business needs. Microsoft had enough foresight to set up AD as a distributed platform that can continue to function — without much or, in some cases, no interruption in services — even if parts of the system went offline. This was helpful when AD nodes were still physical servers that were often spread across multiple racks or data centers to avoid downtime. So, the question now becomes, what’s the right way to virtualize Active Directory design?

Don’t defeat the native AD distributed abilities

Active Directory is a distributed platform, and virtualizing it carelessly can undermine the software’s native distributed functionality. AD nodes can be placed on different hosts and fail-over software will restart VMs if a host crashes, but what if your primary storage goes down? It’s one scenario you should not discount.

When you undertake the Active Directory design process for a virtualization platform, you must go beyond just a host failure and look at common infrastructure outages that can take out critical systems. One of the advantages of separate physical servers was the level of resiliency the arrangement provided. While we don’t want to abandon virtual servers, we must understand the limits and concerns associated with them and consider additional areas such as management clusters.

Management clusters are often slightly lower-tier platforms — normally still virtualized — that only contain management servers, applications and infrastructure. This is where you would want to place a few AD nodes, so they are outside of the production environment they manage. The challenge with a virtualized management cluster is that it can’t be placed on the same physical storage as production; doing so would defeat the purpose of the separation of duties. You can use more cost-effective storage platforms such as a virtual storage area network for shared storage or even local storage.

Remember, this is infrastructure and not core production, so IOPS should not be as much of an issue because the goal is resiliency, not performance. This means local drives and RAID groups should be able to provide the IOPS required.

How to keep AD running like clockwork

One of the issues with AD controllers in a virtualized environment is time drift.

All computers have clocks and proper timekeeping is critical to both the performance and security of the entire network. Most servers and workstations get their time from AD, which helps to keep everything in sync and avoids Kerberos security login errors.

These AD servers would usually get their time from an external time source if they were physical, or from their hosts if virtualized. The AD servers would then keep time synchronized using the computer’s internal clock, which is based on CPU cycles.

When you virtualize a server, it no longer has a set number of CPU cycles to base its time on. That means time can drift until the server reaches out for an external time check to reset itself, and because the server cannot accurately track the passage of time between checks, the drift compounds. Time drift can become stuck in a nasty loop because the virtualization hosts often get their time from Active Directory.

Your environment needs an external time source that is not dependent on virtualization to keep things grounded. While internet time sources are tempting, having the infrastructure reach out for time checks might not be ideal. A core switch or other key piece of networking gear can offer a dependable time source that is unlikely to be affected by drift due to its hardware nature. You can then use this time source as the sync source for both the virtualization hosts and AD, so all systems are on the same time that comes from the same source.
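As one hedged example, the Windows Time service on each virtualization host and on the PDC emulator can be pointed at that network device’s NTP service; the 10.0.0.1 address below is a placeholder for your switch or time appliance:

w32tm /config /manualpeerlist:"10.0.0.1" /syncfromflags:manual /update
Restart-Service w32time
w32tm /resync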

Some people will insist on a single physical server in a virtualized data center for this reason. That’s an option, but one that is not usually needed. Virtualization isn’t something to avoid in Active Directory design, but it needs to be done with thought and planning to ensure the infrastructure can support the AD configuration. Management clusters are key to the separation of AD nodes and roles.

This does not mean that high availability (HA) rules for Hyper-V or VMware environments are not required. Both production and management environments should have HA rules to prevent AD servers from running on the same hosts.

Rules should be in place to ensure these servers restart first and have reserved resources for proper operations. Smart HA rules are easy to overlook as more AD controllers are added and the rules configuration is forgotten.

The goal is not to prevent outages from happening — that’s not possible. It is to have enough replicas and roles of AD in the right places so users won’t notice. You might scramble a little behind the scenes if a disruption happens, but that’s part of the job. The key is to keep customers moving along without them knowing about any of the issues happening in the background.

Go to Original Article