Tag Archives: Quick

For Sale – LG Ultragear 32GK850F 32 Inch LCD 144hz Gaming Monitor – Black.

Price reduced as a quick sale is wanted. No dead pixels, no marks or scratches; immaculate, like new. Comes with box, power supply and DisplayPort cable. Selling as I have chosen a different size monitor. Collection only; you're welcome to come and view.

Go to Original Article
Author:

For Sale – i5, 8GB ram,

Just a quick bump/info
You cannot beat the 3570k for overclocking value.
Mine ran comfortably at 4.6GHz with a single-rad AIO cooler, and I backed it down to 4.2GHz, which it runs at now with throttle management, sitting mostly at 2.6GHz.
When I get my arse in gear and build my i9-9820X/2080 Ti monster sitting in its boxes, I’ll be retiring mine.
*- 35 bucks for a quad core i5 that’ll do those speeds is remarkable!! -*
<<Glen>>

Go to Original Article
Author:

For Sale – Intel Pentium Gold G5400 @ 3.70GHz, 8GB DDR4 Ram, Intel UHD Graphics 610, 120GB SSD, 2TB HDD

Going by a quick Google search I would price them as follows:

PC without SSD & HDD – £130 posted
SSD & HDD – £40 posted

These prices are only applicable if the two offers above are accepted based on my pricing. I won’t ship one part out without the other, so unless both sell at the same time, I would still like the unit to go as a whole.

So what do you guys think?

Go to Original Article
Author:

For Sale – Gaming PC, i7 7700k, 16GB DDR4, GTX 1080ti, 240GB SSD, 1TB HDD

Quick note: These pictures show a GTX 1060 Strix. I will update them tomorrow in the light. Just to confirm the card in the machine is an ASUS ROG Strix GTX 1080ti Gaming 11G.

Here is a quick image showing the 1080 Ti Strix installed. I’ll update tomorrow with better lighting. Note the two PCIe power connectors and the slightly thicker card.
[​IMG]

Personal delivery by me is included. ~150 preferred, but if you’re further away, ask me.
Pleeeeeeeeeeeeease for the love of satan don’t ask me to split

Specifications
Intel i7 7700k
ASUS ROG Strix GTX 1080ti Gaming 11G
Team Group Delta 16GB RGB 2666MHz
ASUS Strix z270f Gaming RGB motherboard
Kingston A400 240GB SSD
Seagate 1TB HDD
NZXT x62 Kraken Liquid Cooler
NZXT H500i RGB Case
NZXT 5 port Internal USB Hub
NZXT HUE+ Advance Lighting Controller
EZDIY-FAB PCIe Vertical Flexible Cable Extension
Corsair CX850M 80+ Bronze PSU
TP-Link N900 Dual Band WiFi Card
2x NZXT 140mm intake fans (front, mounted on the X62 Kraken)
1x NZXT AER RGB 140mm case fan (top)
1x NZXT AER RGB 120mm case fan (rear)
Windows 10 Pro (activated with key)

£1250


Price and currency: £1250
Delivery: Delivery cost is included within my country
Payment method: bt / cash
Location: Birmingham
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article
Author:

How HR can help with a digital divide in the U.S.

Technology is often a solution to modern HR challenges, but HR executive Robin Schooling was quick to point out that tech can also be a problem for employees without equal access or related skills.

For example, according to a Purdue University report released earlier this year, “Job and establishment growth between 2010 and 2015 was substantially lower in [U.S.] counties with the highest digital divide.”

Schooling, who is vice president of HR at Hollywood Casino in Baton Rouge, La., will speak about the digital divide in the U.S. at the forthcoming HR Technology Conference in Las Vegas. SearchHRSoftware is the media partner for the conference.

In a preview of her remarks, Schooling outlined the challenges and explained why her three-person HR team sometimes takes an old-school, hands-on approach to everything from training to benefits enrollment.

This interview was edited lightly for brevity and clarity.

When you talk about the digital divide in the U.S., what do you mean?

Robin Schooling: The tendency is to think, as we automate more and more things, that people and job seekers are just going to go right along with all of it. We think that all job seekers or employees have the same level of technology at their disposal. We think the knowledge base is there and that they have the sophistication to go along with the online journey, from job searching to applying for jobs to the onboarding programs. And then, once they’re in-house, they’re ready for any online training.

Robin Schooling, vice president of HR, Hollywood Casino

We’re sort of assuming everybody in the workforce is at a desk in a high-rise and that technical knowledge is at [their] disposal. There is an entire group of workers and industries that I fear are getting left further and further behind. It’s the technology haves and have-nots.

How does email play into the digital divide in the U.S.?

Schooling: In my particular world, I have less than 20% of my employees who have work email or access to network drives. They are not at a desk; they are on the floor in front of customers, and they don’t use technology day to day in their jobs. So, they don’t have company email addresses.

If you go back further, a lot of my job applicants don’t have personal email addresses. I probably have, on average, two people a week we talk to who apply for a job using somebody else’s email address. We discover, when we get hold of them, that they never got our email and that the person whose email address they used ‘didn’t let me know it came.’ Or, people make up email addresses.

As we talk to people, we find out there truly are people who just don’t have email. It’s not an age thing. I see people who are 22, and I see people who are 70. We get a lot of calls.

[When we get a resume emailed to us,] we have auto reply, but sometimes it goes to a spam folder, and they don’t know how to check a spam folder. We have applicants who don’t have desktops or tablets. Their phone is it. And they don’t know how to navigate email addresses even if they have email. Instead, we get a lot of calls. It’s like 1989.

So, we’ve tried things with texting. We’ve tried a cellphone call. But it gets back to the low-wage, entry-level worker with a pay-as-you-go phone. They tried to apply, but we can’t reach them on it. It’s a challenge. I don’t know the answer to the problem. But we have to look at finding multiple ways to connect with people on the applicant side.

There’s a digital divide in the U.S., but change has to start somewhere. What are you doing in the office to help the tech have-nots?

Schooling: As we have automated, we enhanced some of the offerings through the system we have for our employees, because we don’t have folks who sit at a desk. We can put things out in the cloud to our employee self-service portal, but we’re still struggling with employees getting access if they’re doing it through their phone.

We have banks of computers in the break room, but many employees have challenges accessing them. They don’t know how to use a keyboard or a mouse. That still exists. Everybody doesn’t work on the East or West Coasts, and we’re not all on Slack.

And it’s not just the tech providers [that contribute to the digital divide in the U.S.]; it’s the HR service providers. Every year since I’ve been here, as part of our wellness initiatives, we have a third party come in and do biometric screenings. You get a checkup and sit with a nurse practitioner who logs you in to your account. We do this so that we get you into this wellness tracker for follow-up steps. In order to do that, you have to set up your own account, and that requires an email address and two-factor authentication.

Here’s the challenge: I’ve got a good 30 employees that do not have a personal email address. They may have access to their mothers’ or their wives’ or their sisters’ [email addresses]. I told the vendor that people coming in can’t set up an account because they do not have a personal email, or they are using a family member’s [email] who is not there to do two-factor authentication. How do we do that? How do we serve those people? The provider didn’t quite believe me.

We know the people that don’t have email. How do we solve this? As an HR team, what we do for our population is we try to help people as much as we can to set up accounts. We spend quite a bit of time amongst the team going to Gmail to set up an account and set it up on their phone for them. We try to help them create passwords and show them how to remember where it’s stored. It’s a challenge to do this one-on-one to help our folks as much as we can, but I think it’s important.

In my particular world, I have less than 20% of my employees who have work email or access to network drives.
Robin Schooling, vice president of HR, Hollywood Casino

It’s like old-school HR. It’s very hands-on and in your face. If you have a small or midsize business and your workforce is all in one place, what you can do there is kind of what we do. It’s handholding and bringing along one person at a time. Sometimes, it’s even just stopping and asking: If we are going to communicate something or expect an employee to go to a website to accomplish something for a job, are they equipped to do that?

When we think about the digital divide in the U.S., what can be done to help narrow it?

Schooling: This is an issue that worries me. We are just getting further and further away from thinking about those people who need to find jobs or are working hard, but are sitting in companies that don’t realize that perhaps there are folks being left behind. These are people who can’t do a really cool learning module on their phone.

I have people that are not hipsters — they have a flip phone, but don’t have a data plan. And if there is Wi-Fi, they need to have someone show them how to hook up. They can’t sit at home and do onboarding videos or learning snippets.

At the end of the day, I’m thinking about it from [different] sides. Are the vendors remembering when creating products to include the whole audience? Are HR practitioners aware that you need to do more than meet them where they are, but actually bring them with you?

And it’s important to remember it’s not a generational thing. Some folks coming out of school, college grads even, find themselves in this boat. They come from low-income families, and they’ve gotten higher ed, but they struggle with the access to the tools and the tech and the knowledge of how to use them.

How to Architect and Implement Networks for a Hyper-V Cluster

We recently published a quick tip article recommending the number of networks you should use in a cluster of Hyper-V hosts. I want to expand on that content to make it clear why we’ve changed practice from pre-2012 versions and how we arrive at this guidance. Use the previous post for quick guidance; read this one to learn the supporting concepts. These ideas apply to all versions from 2012 onward.

Why Did We Abandon Practices from 2008 R2?

If you dig on TechNet a bit, you can find an article outlining how to architect networks for a 2008 R2 Hyper-V cluster. While it was perfect for its time, we have new technologies that make its advice obsolete. I have two reasons for bringing it up:

  • Some people still follow those guidelines on new builds — worse, they recommend it to others
  • Even though we no longer follow that implementation practice, we still need to solve the same fundamental problems

We changed practices because we gained new tools to address our cluster networking problems.

What Do Cluster Networks Need to Accomplish for Hyper-V?

Our root problem has never changed: we need to ensure that we always have enough available bandwidth to prevent choking out any of our services or inter-node traffic. In 2008 R2, we could only do that by using multiple physical network adapters and designating traffic types to individual pathways. Note: It was possible to use third-party teaming software to overcome some of that challenge, but that was never supported and introduced other problems.

Starting from our basic problem, we next need to determine how to delineate those various traffic types. That original article did some of that work. We can immediately identify what appears to be four types of traffic:

  • Management (communications with hosts outside the cluster, ex: inbound RDP connections)
  • Standard inter-node cluster communications (ex: heartbeat, cluster resource status updates)
  • Cluster Shared Volume traffic
  • Live Migration

However, it turns out that some clumsy wording caused confusion. Cluster communication traffic and Cluster Shared Volume traffic are exactly the same thing. That reduces our needs to three types of cluster traffic.

What About Virtual Machine Traffic?

You might have noticed that I didn’t say anything about virtual machine traffic above. Same would be true if you were working up a different kind of cluster, such as SQL. I certainly understand the importance of that traffic; in my mind, service traffic prioritizes above all cluster traffic. Understand one thing: service traffic for external clients is not clustered. So, your cluster of Hyper-V nodes might provide high availability services for virtual machine vmabc, but all of vmabc‘s network traffic will only use its owning node’s physical network resources. So, you will not architect any cluster networks to process virtual machine traffic.

As for preventing cluster traffic from squelching virtual machine traffic, we’ll revisit that in an upcoming section.

Fundamental Terminology and Concepts

These discussions often go awry over a misunderstanding of basic concepts.

  • Cluster Name Object: A Microsoft Failover Cluster has its own identity separate from its member nodes known as a Cluster Name Object (CNO). The CNO uses a computer name, appears in Active Directory, has an IP, and registers in DNS. Some clusters, such as SQL, may use multiple CNOs. A CNO must have an IP address on a cluster network.
  • Cluster Network: A Microsoft Failover Cluster scans its nodes and automatically creates “cluster networks” based on the discovered physical and IP topology. Each cluster network constitutes a discrete communications pathway between cluster nodes.
  • Management network: A cluster network that allows inbound traffic meant for the member host nodes and typically used as their default outbound network to communicate with any system outside the cluster (e.g. RDP connections, backup, Windows Update). The management network hosts the cluster’s primary cluster name object. Typically, you would not expose any externally-accessible services via the management network.
  • Access Point (or Cluster Access Point): The IP address that belongs to a CNO.
  • Roles: The name used by Failover Cluster Management for the entities it protects (e.g. a virtual machine, a SQL instance). I generally refer to them as services.
  • Partitioned: A status that the cluster will give to any network on which one or more nodes does not have a presence or cannot be reached.
  • SMB: ALL communications native to failover clustering use Microsoft’s Server Message Block (SMB) protocol. With the introduction of version 3 in Windows Server 2012, that now includes innate multi-channel capabilities (and more!)

Are Microsoft Failover Clusters Active/Active or Active/Passive?

Microsoft Failover Clusters are active/passive. Every node can run services at the same time as the other nodes, but no single service can be hosted by multiple nodes. In this usage, “service” does not mean those items that you see in the Services Control Panel applet. It refers to what the cluster calls “roles” (see above). Only one node will ever host any given role or CNO at any given time.

How Does Microsoft Failover Clustering Identify a Network?

The cluster decides what constitutes a network; your build guides it, but you do not have any direct input. Any time the cluster’s network topology changes, the cluster service re-evaluates.

First, the cluster scans a node for logical network adapters that have IP addresses. That might be a physical network adapter, a team’s logical adapter, or a Hyper-V virtual network adapter assigned to the management operating system. It does not see any virtual NICs assigned to virtual machines.

For each discovered adapter and IP combination on that node, it builds a list of networks from the subnet masks. For instance, if it finds an adapter with an IP of 192.168.10.20 and a subnet mask of 255.255.255.0, then it creates a 192.168.10.0/24 network.

The cluster then continues through all of the other nodes, following the same process.

Be aware that every node does not need to have a presence in a given network in order for failover clustering to identify it; however, the cluster will mark such networks as partitioned.
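
If you want to see what the cluster discovered, the failover clustering PowerShell module will show you the networks it built and the subnet each one came from. A quick, read-only sketch (assumes the FailoverClusters module, run on any cluster node):

Import-Module FailoverClusters

# Lists the auto-created cluster networks with the subnet, state, and role of each.
Get-ClusterNetwork | Select-Object Name, Address, AddressMask, State, Role | Format-Table -AutoSize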

What Happens if a Single Adapter has Multiple IPs?

If you assign multiple IPs to the same adapter, one of two things will happen. Which of the two depends on whether or not the secondary IP shares a subnet with the primary.

When an Adapter Hosts Multiple IPs in Different Networks

The cluster identifies networks by adapter first. Therefore, if an adapter has multiple IPs, the cluster will lump them all into the same network. If another adapter on a different host has an IP in one of the networks but not all of the networks, then the cluster will simply use whichever IPs can communicate.

As an example, see the following network:

The second node has two IPs on the same adapter and the cluster has added it to the existing network. You can use this to re-IP a network with minimal disruption.

A natural question: what happens if you spread IPs for the same subnet across different existing networks? I tested it a bit and the cluster allowed it and did not bring the networks down. However, it always had the functional IP pathway to use, so that doesn’t tell us much. Had I removed the functional pathways, then it would have collapsed the remaining IPs into an all-new network and it would have worked just fine. I recommend keeping an eye on your IP scheme and not allowing things like that in the first place.

When an Adapter Hosts Multiple IPs in the Same Network

The cluster will pick a single IP in the same subnet to represent the host in that network.

What if Different Adapters on the Same Host have an IP in the Same Subnet?

The same outcome occurs as if the IPs were on the same adapter: the cluster picks one to represent the host in that network and ignores the rest.

The Management Network

All clusters (Hyper-V, SQL, SOFS, etc.) require a network that we commonly dub Management. That network contains the CNO that represents the cluster as a singular system. The management network has little importance for Hyper-V, but external tools connect to the cluster using that network. By necessity, the cluster nodes use IPs on that network for their own communications.

The management network will also carry cluster-specific traffic. More on that later.

Note: Hyper-V Replica uses the management network.

Cluster Communications Networks (Including Cluster Shared Volume Traffic)

A cluster communications network will carry:

  • Cluster heartbeat information. Each node must hear from every other node within a specific amount of time (1 second by default). If a node does not hear from enough other nodes to maintain quorum, it will begin failover procedures. Failover is more complicated than that, but beyond the scope of this article. (A quick way to check the heartbeat timing follows this list.)
  • Cluster configuration changes. If any configuration item changes, whether to the cluster’s own configuration or the configuration or status of a protected service, the node that processes the change will immediately transmit to all of the other nodes so that they can update their own local information store.
  • Cluster Shared Volume traffic. When all is well, this network will only carry metadata information. Basically, when anything changes on a CSV that updates its volume information table, that update needs to be duplicated to all of the other nodes. If the change occurs on the owning node, less data needs to be transmitted, but it will never be perfectly quiet. So, this network can be quite chatty, but will typically use very little bandwidth. However, if one or more nodes lose direct connectivity to the storage that hosts a CSV, all of its I/O will route across a cluster network. Network saturation will then depend on the amount of I/O the disconnected node(s) need(s).
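
If you’d like to check the heartbeat timing mentioned above, the delay and threshold values are ordinary cluster properties. A read-only sketch (adjust these values only with care):

# Delay = time between heartbeats; Threshold = missed heartbeats tolerated before
# a node is treated as unreachable.
Get-Cluster | Select-Object SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold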

Live Migration Networks

That heading is a bit of a misnomer. The cluster does not have its own concept of a Live Migration network per se. Instead, you let the cluster know which networks you will permit to carry Live Migration traffic. You can independently choose whether or not those networks can carry other traffic.

Other Identified Networks

The cluster may identify networks that we don’t want to participate in any kind of cluster communications at all. iSCSI serves as the most common example. We’ll learn how to deal with those.

Architectural Goals

Now we know our traffic types. Next, we need to architect our cluster networks to handle them appropriately. Let’s begin by understanding why you shouldn’t take the easy route of using a singular network. A minimally functional Hyper-V cluster only requires that “management” network. Stopping there leaves you vulnerable to three problems:

  • The cluster will be unable to select another IP network for different communication types. As an example, Live Migration could choke out the normal cluster heartbeat, causing nodes to consider themselves isolated and shut down
  • The cluster and its hosts will be unable to perform efficient traffic balancing, even when you utilize teams
  • IP-based problems in that network (even external to the cluster) could cause a complete cluster failure

Therefore, you want to create at least one other network. In the pre-2012 model, we could designate specific adapters to carry specific traffic types. In the 2012 and later model, we simply create at least one additional network that allows cluster communications but not client access. Some benefits:

  • Clusters of version 2012 or newer will automatically employ SMB multichannel. Inter-node traffic (including Cluster Shared Volume data) will balance itself without further configuration work (a quick way to verify this appears after this list).
  • The cluster can bypass trouble on one IP network by choosing another; you can help by disabling a network in Failover Cluster Manager
  • Better load balancing across alternative physical pathways
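
To verify that SMB multichannel is actually spreading inter-node traffic across your adapters, you can inspect the live connections from any node. A minimal, read-only sketch using the standard SMB cmdlet:

# Shows the SMB connections this node currently has open, including the interfaces in
# use; cluster and CSV communications ride on SMB, so inter-node paths appear here.
Get-SmbMultichannelConnection | Format-Table -AutoSize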

The Second Supporting Network… and Beyond

Creating networks beyond the initial two can add further value:

  • If desired, you can specify networks for Live Migration traffic, and even exclude those from normal cluster communications. Note: For modern deployments, doing so typically yields little value
  • If you host your cluster networks on a team, matching the number of cluster networks to physical adapters allows the teaming and multichannel mechanisms the greatest opportunity to fully balance transmissions. Note: You cannot guarantee a perfectly smooth balance

Architecting Hyper-V Cluster Networks

Now we know what we need and have a nebulous idea of how that might be accomplished. Let’s get into some real implementation. Start off by reviewing your implementation choices. You have three options for hosting a cluster network:

  • One physical adapter or team of adapters per cluster network
  • Convergence of one or more cluster networks onto one or more physical teams or adapters
  • Convergence of one or more cluster networks onto one or more physical teams claimed by a Hyper-V virtual switch

A few pointers to help you decide:

  • For modern deployments, avoid using one adapter or team for a cluster network. It makes poor use of available network resources by forcing an unnecessary segregation of traffic.
  • I personally do not recommend bare teams for Hyper-V cluster communications. You would need to exclude such networks from participating in a Hyper-V switch, which would also force an unnecessary segregation of traffic.
  • The most even and simple distribution involves a singular team with a Hyper-V switch that hosts all cluster network adapters and virtual machine adapters. Start there and break away only as necessary.
  • A single 10 gigabit adapter swamps multiple gigabit adapters. If your hosts have both, don’t even bother with the gigabit.

To simplify your architecture, decide early:

  • How many networks you will use. They do not need to have different functions. For example, the old management/cluster/Live Migration/storage breakdown no longer makes sense. One management and three cluster networks for a four-member team does make sense.
  • The IP structure for each network. For networks that will only carry cluster (including intra-cluster Live Migration) communication, the chosen subnet(s) do not need to exist in your current infrastructure. As long as each adapter in a cluster network can reach all of the others at layer 2 (Ethernet), you can invent any IP network that you want.

I recommend that you start off expecting to use a completely converged design that uses all physical network adapters in a single team. Create Hyper-V network adapters for each unique cluster network. Stop there, and make no changes unless you detect a problem.
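
To make that concrete, here is a hedged sketch of the fully converged starting point in PowerShell. Every name, subnet, and address below is a placeholder for illustration, not a prescription; run the equivalent on each node with its own IPs:

# One team from all physical adapters (the Dynamic algorithm requires 2012 R2 or later).
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# A single Hyper-V switch on that team; Weight mode keeps QoS available later if ever needed.
New-VMSwitch -Name "vSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight

# Management-OS virtual adapters: one for management, the rest for cluster communications.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster1" -SwitchName "vSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster2" -SwitchName "vSwitch"

# Example addressing: only the management subnet must exist on the "real" network; the
# cluster-only subnets just need to be consistent across nodes and unused elsewhere.
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.10.11 -PrefixLength 24 -DefaultGateway 192.168.10.1
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster1)" -IPAddress 192.168.77.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster2)" -IPAddress 192.168.78.11 -PrefixLength 24

If your environment uses VLANs, Set-VMNetworkAdapterVlan -ManagementOS can tag each virtual adapter; the cluster only cares that each node's adapters for a given network end up in the same subnet.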

Comparing the Old Way to the New Way (Gigabit)

Let’s start with a build that would have been common in 2010 and walk through our options up to something more modern. I will only use gigabit designs in this section; skip ahead for 10 gigabit.

In the beginning, we couldn’t use teaming. So, we used a lot of gigabit adapters:

There would be some variations of this. For instance, I would have added another adapter so that I could use MPIO with two iSCSI networks. Some people used Fiber Channel and would not have iSCSI at all.

Important Note: The “VMs” that you see there means that I have a virtual switch on that adapter and the virtual machines use it. It does not mean that I have created a VM cluster network. There is no such thing as a VM cluster network. The virtual machines are unaware of the cluster and they will not talk to it (if they do, they’ll use the Management access point like every other non-cluster system).

Then, 2012 introduced teaming. We could then do all sorts of fun things with convergence. My very least favorite:

This build takes teams to an excess. Worse, the management, cluster, and Live Migration teams will be idle almost all the time, meaning that 60% of this host’s networking capacity will be generally unavailable.

Let’s look at something a bit more common. I don’t like this one, but I’m not revolted by it either:

A lot of people like that design because, so they say, it protects the management adapter from problems that affect the other roles. I cannot figure out how they perform that calculus. Teaming addresses any probable failure scenarios. For anything else, I would want the entire host to fail out of the cluster. In this build, a failure that brought the team down but not the management adapter would cause its hosted VMs to become inaccessible because the node would remain in the cluster. That’s because the management adapter would still carry cluster heartbeat information.

My preferred design follows:

Now we are architected against almost all types of failure. In a “real-world” build, I would still have at least two iSCSI NICs using MPIO.

What is the Optimal Gigabit Adapter Count?

Because we had one adapter per role in 2008 R2, we often continue using the same adapter count in our 2012+ builds. I don’t feel that’s necessary for most builds. I am inclined to use two or three adapters in data teams and two adapters for iSCSI. For anything past that, you’ll need to have collected some metrics to justify the additional bandwidth needs.

10 Gigabit Cluster Network Design

10 gigabit changes all of the equations. In reasonable load conditions, a single 10 gigabit adapter moves data more than 10 times faster than a single gigabit adapter. When using 10 GbE, you need to change your approaches accordingly. First, if you have both 10GbE and gigabit, just ignore the gigabit. It is not worth your time. If you really want to use it, then I would consider using it for iSCSI connections to non-SSD systems. Most installations relying on iSCSI-connected spinning disks cannot sustain even 2 Gbps, so gigabit adapters would suffice.

Logical Adapter Counts for Converged Cluster Networking

I didn’t include the Hyper-V virtual switch in any of the above diagrams, mostly because it would have made the diagrams more confusing. However, I would use a team hosting a Hyper-V virtual switch to carry all of the logical adapters necessary. For a non-Hyper-V cluster, I would create a logical team adapter for each role. Remember that on a logical team, you can only have a single logical adapter per VLAN. The Hyper-V virtual switch has no such restriction. Also remember that you should not use multiple logical team adapters on any team that hosts a Hyper-V virtual switch. Some of the behavior is undefined and your build might not be supported.

I would always use these logical/virtual adapter counts:

  • One management adapter
  • A minimum of one cluster communications adapter up to n-1, where n is the number of physical adapters in the team. You can subtract one because the management adapter acts as a cluster adapter as well

In a gigabit environment, I would add at least one logical adapter for Live Migration. That’s optional because, by default, all cluster-enabled networks will also carry Live Migration traffic.

In a 10 GbE environment, I would not add designated Live Migration networks. It’s just logical overhead at that point.

In a 10 GbE environment, I would probably not set aside physical adapters for storage traffic. At those speeds, the differences in offloading technologies don’t mean that much.

Architecting IP Addresses

Congratulations! You’ve done the hard work! Now you just need to come up with an IP scheme. Remember that the cluster builds networks based on the IPs that it discovers.

Every network needs one IP address for each node. Any network that contains an access point will need an additional IP for the CNO. For Hyper-V clusters, you only need a management access point. The other networks don’t need a CNO.

Only one network really matters: management. Your physical nodes must use that to communicate with the “real” network beyond. Choose a set of IPs available on your “real” network.

For all the rest, the member IPs only need to be able to reach each other over layer 2 connections. If you have an environment with no VLANs, then just make sure that you pick IPs in networks that don’t otherwise exist. For instance, you could use 192.168.77.0/24 for something, as long as that’s not a “real” range on your network. Any cluster network without a CNO does not need to have a gateway address, so it doesn’t matter that those networks won’t be routable. It’s preferred, in fact.
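
The management access point itself is supplied when the cluster is first created; the remaining networks are then discovered automatically from each node's other adapters. A minimal sketch (the names and address are placeholders):

# Creates the cluster with its CNO and management access point IP.
New-Cluster -Name "HV-CLUSTER" -Node "HV-NODE1","HV-NODE2" -StaticAddress 192.168.10.10 -NoStorage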

Implementing Hyper-V Cluster Networks

Once you have your architecture in place, you only have a little work to do. Remember that the cluster will automatically build networks based on the subnets that it discovers. You only need to assign names and set them according to the type of traffic that you want them to carry. You can choose:

  • Allow cluster communication (intra-node heartbeat, configuration updates, and Cluster Shared Volume traffic)
  • Allow client connectivity to cluster resources (includes cluster communication) and cluster communications (you cannot choose client connectivity without cluster connectivity)
  • Prevent participation in cluster communications (often used for iSCSI and sometimes connections to external SMB storage)

As much as I like PowerShell for most things, Failover Cluster Manager makes this all very easy. Access the Networks tree of your cluster:

I’ve already renamed mine in accordance with their intended roles. A new build will have “Cluster Network”, “Cluster Network 1”, etc. Double-click on one to see which IP range(s) it assigned to that network:

Work your way through each network, setting its name and what traffic type you will allow (a scripted equivalent follows this list). Your choices:

  • Allow cluster network communication on this network AND Allow clients to connect through this network: use these two options together for the management network. If you’re building a non-Hyper-V cluster that needs access points on non-management networks, use these options for those as well. Important: The adapters in these networks SHOULD register in DNS.
  • Allow cluster network communication on this network ONLY (do not check Allow clients to connect through this network): use for any network that you wish to carry cluster communications (remember that includes CSV traffic). Optionally use for networks that will carry Live Migration traffic (I recommend that). Do not use for iSCSI networks. Important: The adapters in these networks SHOULD NOT register in DNS.
  • Do not allow cluster network communication on this network: Use for storage networks, especially iSCSI. I also use this setting for adapters that will use SMB to connect to a storage server running SMB version 3.02 in order to run my virtual machines. You might want to use it for Live Migration networks if you wish to segregate Live Migration from cluster traffic (I do not do or recommend that).
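
If you prefer to script it, the same renaming and traffic-type settings can be applied through the cluster network objects. A hedged sketch using the placeholder names from earlier (match the auto-generated names to your own subnets first; Role 3 = cluster and client, 1 = cluster only, 0 = none):

(Get-ClusterNetwork "Cluster Network 1").Name = "Management"
(Get-ClusterNetwork "Management").Role = 3          # cluster and client

(Get-ClusterNetwork "Cluster Network 2").Name = "Cluster1"
(Get-ClusterNetwork "Cluster1").Role = 1            # cluster communications only

(Get-ClusterNetwork "Cluster Network 3").Name = "iSCSI"
(Get-ClusterNetwork "iSCSI").Role = 0               # no cluster communication

# DNS registration per the guidance above (interface aliases are examples).
Set-DnsClient -InterfaceAlias "vEthernet (Management)" -RegisterThisConnectionsAddress $true
Set-DnsClient -InterfaceAlias "vEthernet (Cluster1)" -RegisterThisConnectionsAddress $false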

Once done, you can configure Live Migration traffic. Right-click on the Networks node and click Live Migration Settings:

Check a network’s box to enable it to carry Live Migration traffic. Use the Up and Down buttons to prioritize.
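
This selection can also be scripted. The cluster stores it as the MigrationExcludeNetworks private property on the Virtual Machine resource type, which lists the IDs of networks that may not carry Live Migration. A hedged sketch that permits only a network named Cluster1:

# Exclude every cluster network except "Cluster1" from Live Migration.
$exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "Cluster1" }).Id -join ";"
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value $exclude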

What About Traffic Prioritization?

In 2008 R2, we had some fairly arcane settings for cluster network metrics. You could use those to adjust which networks the cluster would choose as alternatives when a primary network was inaccessible. We don’t use those anymore because SMB multichannel just figures things out. However, be aware that the cluster will deliberately choose Cluster Only networks over Cluster and Client networks for inter-node communications.

What About Hyper-V QoS?

When 2012 first debuted, it brought Hyper-V networking QoS along with it. That was some really hot new tech, and lots of us dove right in and lost a lot of sleep over finding the “best” configuration. And then, most of us realized that our clusters were doing a fantastic job balancing things out all on their own. So, I would recommend that you avoid tinkering with Hyper-V QoS unless you have tried going without and had problems. Before you change anything, determine what traffic actually needs to be throttled or boosted. Do not simply start flipping switches, because the rest of us already tried that and didn’t get results. If you need to change QoS, start with this TechNet article.
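
If measurement does show that you need it, the weight-based settings on a converged switch look roughly like the sketch below. It assumes the switch was created with -MinimumBandwidthMode Weight and uses the placeholder vNIC names from earlier; the weights themselves are purely illustrative:

# Relative weights, not hard caps; they only take effect under contention.
Set-VMSwitch -Name "vSwitch" -DefaultFlowMinimumBandwidthWeight 50      # default flow, mostly VM traffic
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster1" -MinimumBandwidthWeight 20
Set-VMNetworkAdapter -ManagementOS -Name "Cluster2" -MinimumBandwidthWeight 20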

Your thoughts?

Does your preferred network design differ from mine? Have you decided to give my arrangement a try? How did you get on? Let me know in the comments below; I really enjoy hearing from you guys!

Ballistix Sport 2x4GB DDR4 2400Mhz XMP

Upgraded to 16GB, so this kit is up for sale.

never missed a beat.

https://www.amazon.co.uk/Ballistix-BLS2C4G4D240FSB-PC4-19200-288-Pin-Memory/dp/B00UFBZOLO?th=1

Looking for a quick sale @ £60 inc. shipping.

Price and currency: £60
Delivery: Delivery cost is included within my country
Payment method: ppg bt
Location: Peterborough
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference…
