Category Archives: Hyper-V

6 Hardware Tweaks that will Skyrocket your Hyper-V Performance

17 Aug 2017 by Eric Siron     Hyper-V Articles

Few Hyper-V topics burn up the Internet quite like “performance”. No matter how fast it goes, we always want it to go faster. If you search even a little, you’ll find many articles with long lists of ways to improve Hyper-V’s performance. The less focused articles start with general Windows performance tips and sprinkle some Hyper-V-flavored spice on them. I want to use this article to tighten the focus down on Hyper-V hardware settings only. That means it won’t be as long as some others; I’ll just think of that as wasting less of your time.

1. Upgrade your system

I guess this goes without saying but every performance article I write will always include this point front-and-center. Each piece of hardware has its own maximum speed. Where that speed barrier lies in comparison to other hardware in the same category almost always correlates directly with cost. You cannot tweak a go-cart to outrun a Corvette without spending at least as much money as just buying a Corvette — and that’s without considering the time element. If you bought slow hardware, then you will have a slow Hyper-V environment.

Fortunately, this point has a corollary: don’t panic. Production systems, especially server-class systems, almost never experience demand levels that compare to the stress tests that admins put on new equipment. If typical load levels were that high, it’s doubtful that virtualization would have caught on so quickly. We use virtualization for so many reasons nowadays, we forget that “cost savings through better utilization of under-loaded server equipment” was one of the primary drivers of early virtualization adoption.

2. BIOS Settings for Hyper-V Performance

Don’t neglect your BIOS! It contains some of the most important settings for Hyper-V.

  • C States. Disable C States! Few things impact Hyper-V performance quite as strongly as C States! Names and locations will vary, so look in areas related to Processor/CPU, Performance, and Power Management. If you can’t find anything that specifically says C States, then look for settings that disable/minimize power management. C1E is usually the worst offender for Live Migration problems, although other modes can cause issues.
  • Virtualization support: A number of features have popped up through the years, but most BIOS manufacturers have since consolidated them all into a global “Virtualization Support” switch, or something similar. I don’t believe that current versions of Hyper-V will even run if these settings aren’t enabled. Here are some individual component names, for those special BIOSs that break them out:
    • Virtual Machine Extensions (VMX)
    • AMD-V — AMD CPUs/mainboards. Be aware that Hyper-V can’t (yet?) run nested virtual machines on AMD chips
    • VT-x, or sometimes just VT — Intel CPUs/mainboards. Required for nested virtualization with Hyper-V in Windows 10/Server 2016
  • Data Execution Prevention: DEP means less for performance and more for security. It’s also a requirement. But, we’re talking about your BIOS settings and you’re in your BIOS, so we’ll talk about it. Just make sure that it’s on. If you don’t see it under the DEP name, look for:
    • No Execute (NX) — AMD CPUs/mainboards
    • Execute Disable (XD) — Intel CPUs/mainboards
  • Second Level Address Translation: I’m including this for completeness. It’s been many years since any system was built new without SLAT support. If you do have one of those older systems, following every point in this post to the letter still won’t make it fast. Starting with Windows 8 on the client side and Windows Server 2016, you cannot use Hyper-V without SLAT support. Names that you will see SLAT under:
    • Nested Page Tables (NPT)/Rapid Virtualization Indexing (RVI) — AMD CPUs/mainboards
    • Extended Page Tables (EPT) — Intel CPUs/mainboards
  • Disable power management. This goes hand-in-hand with C States. Just turn off power management altogether. Get your energy savings via consolidation. You can also buy lower wattage systems.
  • Use Hyperthreading. I’ve seen a tiny handful of claims that Hyperthreading causes problems on Hyper-V. I’ve heard more convincing stories about space aliens. I’ve personally seen the same number of space aliens as I’ve seen Hyperthreading problems with Hyper-V (that would be zero). If you’ve legitimately encountered a problem that was fixed by disabling Hyperthreading AND you can prove that it wasn’t a bad CPU, that’s great! Please let me know. But remember, you’re still in a minority of a minority of a minority. The rest of us will run Hyperthreading.
  • Disable SCSI BIOSs. Unless you are booting your host from a SAN, kill the BIOSs on your SCSI adapters. Leaving them enabled doesn’t help or hurt a running Hyper-V host, but it does slow down physical boot times.
  • Disable BIOS-set VLAN IDs on physical NICs. Some network adapters support VLAN tagging through boot-up interfaces. If you then bind a Hyper-V virtual switch to one of those adapters, you could encounter all sorts of network nastiness.

3. Storage Settings for Hyper-V Performance

I wish the IT world would learn to cope with the fact that rotating hard disks do not move data very quickly. If you just can’t cope with that, buy a gigantic lot of them and make big RAID 10 arrays. Or, you could get a stack of SSDs. Don’t get six or so spinning disks and get sad that they “only” move data at a few hundred megabytes per second. That’s how the tech works.

Performance tips for storage:

  • Learn to live with the fact that storage is slow.
  • Remember that speed tests do not reflect real world load and that file copy does not test anything except permissions.
  • Learn to live with Hyper-V’s I/O scheduler. If you want a computer system to have 100% access to storage bandwidth, start by checking your assumptions. Just because a single file copy doesn’t go as fast as you think it should, does not mean that the system won’t perform its production role adequately. If you’re certain that a system must have total and complete storage speed, then do not virtualize it. The only way that a VM can get that level of speed is by stealing I/O from other guests.
  • Enable read caches
  • Carefully consider the potential risks of write caching. If acceptable, enable write caches. If your internal disks, DAS, SAN, or NAS has a battery backup system that can guarantee clean cache flushes on a power outage, write caching is generally safe. Internal batteries that report their status and/or automatically disable caching are best. UPS-backed systems are sometimes OK, but they are not foolproof.
  • Prefer few arrays with many disks over many arrays with few disks.
  • Unless you’re going to store VMs on a remote system, do not create an array just for Hyper-V. By that, I mean that if you’ve got six internal bays, do not create a RAID-1 for Hyper-V and a RAID-x for the virtual machines. That’s a Microsoft SQL Server 2000 design. This is 2017 and you’re building a Hyper-V server. Use all the bays in one big array.
  • Do not architect your storage to make the hypervisor/management operating system go fast. I can’t believe how many times I read on forums that Hyper-V needs lots of disk speed. After boot-up, it needs almost nothing. The hypervisor remains resident in memory. Unless you’re doing something questionable in the management OS, it won’t even page to disk very often. Architect storage speed in favor of your virtual machines.
  • Set your fibre channel SANs to use very tight WWN masks. Live Migration requires a hand-off from one system to another, and the looser the mask, the longer that takes. With 2016, the guests shouldn’t crash, but the hand-off might be noticeable.
  • Keep iSCSI/SMB networks clear of other traffic. I see a lot of recommendations to put each and every iSCSI NIC on a system into its own VLAN and/or layer-3 network. I’m on the fence about that; a network storm in one iSCSI network would probably justify it. However, keeping those networks quiet would go a long way on its own. For clustered systems, multi-channel SMB needs each adapter to be on a unique layer 3 network (according to the docs; from what I can tell, it works even with same-net configurations).
  • If using gigabit, try to physically separate iSCSI/SMB from your virtual switch. Meaning, don’t make that traffic endure the overhead of virtual switch processing, if you can help it.
  • Round robin MPIO might not be the best, although it’s the most recommended. If you have one of the aforementioned network storms, Round Robin will negate some of the benefits of VLAN/layer 3 segregation. I like least queue depth, myself.
  • MPIO and SMB multi-channel are much faster and more efficient than the best teaming.
  • If you must run MPIO or SMB traffic across a team, create multiple virtual or logical NICs. It will give the teaming implementation more opportunities to create balanced streams.
  • Use jumbo frames for iSCSI/SMB connections if everything supports it (host adapters, switches, and back-end storage). You’ll improve the header-to-payload bit ratio by a meaningful amount.
  • Enable RSS on SMB-carrying adapters. If you have RDMA-capable adapters, absolutely enable that.
  • Use dynamically-expanding VHDX, but not dynamically-expanding VHD. I still see people recommending fixed VHDX for operating system VHDXs, which is just absurd. Fixed VHDX is good for high-volume databases, but mostly because they’ll probably expand to use all the space anyway. Dynamic VHDX enjoys higher average write speeds because it completely ignores zero writes. No defined pattern has yet emerged that declares a winner on read rates, but people who say that fixed always wins are making demonstrably false assumptions.
  • Do not use pass-through disks. The performance is sometimes a little bit better, but sometimes it’s worse, and it almost always causes some other problem elsewhere. The trade-off is not worth it. Just add one spindle to your array to make up for any perceived speed deficiencies. If you insist on using pass-through for performance reasons, then I want to see the performance traces of production traffic that prove it.
  • Don’t let fragmentation keep you up at night. Fragmentation is a problem for single-spindle desktops/laptops, “admins” that never should have been promoted above first-line help desk, and salespeople selling defragmentation software. If you’re here to disagree, you better have a URL to performance traces that I can independently verify before you even bother entering a comment. I have plenty of Hyper-V systems of my own on storage ranging from 3-spindle up to >100 spindle, and the first time I even feel compelled to run a defrag (much less get anything out of it) I’ll be happy to issue a mea culpa. For those keeping track, we’re at 6 years and counting.
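To put rough numbers on the jumbo frame bullet above, here’s a quick back-of-the-envelope calculation. Python is used purely as a calculator here; the per-frame overhead figures are standard Ethernet and TCP/IP values, not something from this article:

```python
# Approximate fraction of on-wire bytes that carry real payload for a
# full-sized frame. Per-frame Ethernet overhead: 14-byte header + 4-byte FCS
# + 8-byte preamble + 12-byte inter-frame gap = 38 bytes. Each frame also
# carries roughly 40 bytes of TCP/IP headers inside its payload.
ETH_OVERHEAD = 38
TCPIP_HEADERS = 40

def payload_efficiency(mtu):
    data = mtu - TCPIP_HEADERS   # application bytes per frame
    wire = mtu + ETH_OVERHEAD    # bytes actually on the wire
    return data / wire

print(f"1500 MTU: {payload_efficiency(1500):.1%}")  # ~94.9%
print(f"9000 MTU: {payload_efficiency(9000):.1%}")  # ~99.1%
```

A few percentage points of extra goodput per frame, plus roughly one-sixth the number of frames for the CPUs and switches to process, which is where the real win usually comes from.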

4. Memory Settings for Hyper-V Performance

There isn’t much that you can do for memory. Buy what you can afford and, for the most part, don’t worry about it.

  • Buy and install your memory chips optimally. Multi-channel memory is somewhat faster than single-channel. Your hardware manufacturer will be able to help you with that.
  • Don’t over-allocate memory to guests. Just because your file server had 16GB before you virtualized it does not mean that it has any use for 16GB.
  • Use Dynamic Memory unless you have a system that expressly forbids it. It’s better to stretch your memory dollar farther than wring your hands about whether or not Dynamic Memory is a good thing. Until directly proven otherwise for a given server, it’s a good thing.
  • Don’t worry so much about NUMA. I’ve read volumes and volumes on it. Even spent a lot of time configuring it on a high-load system. Wrote some about it. Never got any of that time back. I’ve had some interesting conversations with people that really did need to tune NUMA. They constitute… oh, I’d say about 0.1% of all the conversations that I’ve ever had about Hyper-V. The rest of you should leave NUMA enabled at defaults and walk away.

5. Network Settings for Hyper-V Performance

Networking configuration can make a real difference to Hyper-V performance.

  • Learn to live with the fact that gigabit networking is “slow” and that 10GbE networking often has barriers to reaching 10Gbps for a single test. Most networking demands don’t even bog down gigabit. It’s just not that big of a deal for most people.
  • Learn to live with the fact that a) your four-spindle disk array can’t fill up even one 10GbE pipe, much less the pair that you assigned to iSCSI and that b) it’s not Hyper-V’s fault. I know this doesn’t apply to everyone, but wow, do I see lots of complaints about how Hyper-V can’t magically pull or push bits across a network faster than a disk subsystem can read and/or write them.
  • Disable VMQ on gigabit adapters. Gigabit VMQ implementations have a long history of driver bugs, and I think some manufacturers are finally coming around to the fact that they have a problem. Too late, though. The purpose of VMQ is to redistribute inbound network processing for individual virtual NICs away from CPU 0, core 0 to the other cores in the system. Current-model CPUs are fast enough that they can handle many gigabit adapters without that help.
  • If you are using a Hyper-V virtual switch on a network team and you’ve disabled VMQ on the physical NICs, disable it on the team adapter as well. I’ve been saying that since shortly after 2012 came out and people are finally discovering that I’m right, so, yay? Anyway, do it.
  • Don’t worry so much about vRSS. RSS is like VMQ, only for non-VM traffic. vRSS, then, is the projection of VMQ down into the virtual machine. Basically, with traditional VMQ, the VMs’ inbound traffic is separated across pNICs in the management OS, but then each guest still processes its own data on vCPU 0. vRSS splits traffic processing across vCPUs inside the guest once it gets there. The “drawback” is that distributing processing and then redistributing processing causes more processing. So, the load is nicely distributed, but it’s also higher than it would otherwise be. The upshot: almost no one will care. Set it or don’t set it, it’s probably not going to impact you a lot either way. If you’re new to all of this, then you’ll find an “RSS” setting on the network adapter inside the guest. If that’s on in the guest (off by default) and VMQ is on and functioning in the host, then you have vRSS. woohoo.
  • Don’t blame Hyper-V for your networking ills. I mention this in the context of performance because your time has value. I’m constantly called upon to troubleshoot Hyper-V “networking problems” because someone is sharing MACs or IPs or trying to get traffic from the dark side of the moon over a Cat-3 cable with three broken strands. Hyper-V is also almost always blamed by people that just don’t have a functional understanding of TCP/IP. More wasted time that I’ll never get back.
  • Use one virtual switch. Multiple virtual switches cause processing overhead without providing returns. This is a guideline, not a rule, but you need to be prepared to provide an unflinching, sure-footed defense for every virtual switch in a host after the first.
  • Don’t mix gigabit with 10 gigabit in a team. Teaming will not automatically select 10GbE over the gigabit. 10GbE is so much faster than gigabit that it’s best to just kill gigabit and converge on the 10GbE.
  • 10x gigabit cards do not equal 1x 10GbE card. I’m all for only using 10GbE when you can justify it with usage statistics, but gigabit just cannot compete.

6. Maintenance Best Practices

Don’t neglect your systems once they’re deployed!

  • Take a performance baseline when you first deploy a system and save it.
  • Take and save another performance baseline when your system reaches a normative load level (basically, once you’ve reached its expected number of VMs).
  • Keep drivers reasonably up-to-date. Verify that settings aren’t lost after each update.
  • Monitor hardware health. The Windows Event Log often provides early warning symptoms, if you have nothing else.

Further reading

If you carry out all (or as many as possible) of the above hardware adjustments, you will see a considerable jump in your Hyper-V performance. That I can guarantee. However, for those who don’t have the time or patience, or aren’t prepared to make the necessary investment in some cases, Altaro has developed an e-book just for you. Find out more about it here: Supercharging Hyper-V Performance for the time-strapped admin.

Have any questions or feedback?

Leave a comment below!

Hyper-V Key-Value Pair Data Exchange Part 3: Linux

15 Aug 2017 by Eric Siron
Hyper-V Articles

Some time ago, I discovered uses for Hyper-V Key-Value Pair Data Exchange services and began exploiting them on my Windows guests. Now that I’ve started building Linux guests, I need similar functionality. This article covers the differences in the Linux implementation and includes version 1.0 of a program that allows you to receive, send, and delete KVPs.

For a primer on Hyper-V KVP Exchange, start with this article: Hyper-V Key-Value Pair Data Exchange Part 1: Explanation.

The second part of that series presented PowerShell scripts for interacting with Hyper-V KVP Exchange from both the host and the guest sides. The guest script won’t be as useful in the context of Linux. Even if you install PowerShell on Linux, the script won’t work because it reads and writes registry keys. It might still spark some implementation ideas, I suppose.

What is Hyper-V Key-Value Pair Data Exchange?

To save you a few clicks and other reading, I’ll give a quick summary of Hyper-V KVP Exchange.

Virtual machines are intended to be “walled gardens”. The host and guest should have limited ability to interact with each other. That distance sometimes causes inconvenience, but the stronger the separation, the stronger the security. Hyper-V’s KVP Exchange provides one method for moving data across the wall without introducing a crippling security hazard. Either “side” (host or guest) can “send” a message at any time. The other side can receive it — or ignore it. Essentially, they pass notes by leaving them stuck in slots in the “wall” of the “walled garden”.

KVP stands for “key-value pair”. Each of these messages consists of one text key and one text value. The value can be completely empty.

How is Hyper-V KVP Exchange Different on Linux?

On Windows guests, a service runs (Hyper-V Data Exchange Service) that monitors the “wall”. When the host leaves a message, this service copies the information into the guest’s Windows registry. To send a message to the host, you (or an application) create or modify a KVP within a different key in the Windows registry. The service then places that “note” in the “wall” where the host can pick it up. More details can be found in the first article in this series.

Linux runs a daemon that is the analog to the Windows service. It has slightly different names on different platforms, but I’ve been able to locate it on all of my distributions with sudo service --status-all | grep kvp. It may not always be running; more on that in a bit.

Linux doesn’t have a native analog to the Windows registry. Instead, the daemon maintains a set of files. It receives inbound messages from the host and places them in particular files that you can read (or ignore). You can write to one of the files. The daemon will transfer those messages up to the host.

On Windows, I’m not entirely certain of any special limits on KVP sizes. A registry value name can be up to 16,383 characters and there is no hard-coded limit on value data size. I have not tested how KVP Exchange handles these extents on Windows. However, the Linux daemon has much tighter constraints. A key can be no longer than 512 bytes. A value can be no longer than 2,048 bytes.

The keys are case sensitive on the host and on Linux guests. So, key “LinuxKey” is not the same as key “linuxkey”. Windows guests just get confused by that, but Linux handles it easily.

How does Hyper-V KVP Exchange Function on Linux?

As with Windows guests, Data Exchange must be enabled on the virtual machine’s properties:

Hyper-V KVP Exchange on Linux

The daemon must also be installed and running within the guest. Currently-supported versions of the Linux kernel contain the Hyper-V KVP framework natively, so several distributions ship with it enabled. As mentioned in the previous section, the exact name of the daemon varies. You should be able to find it with sudo service --status-all | grep kvp. If it’s not installed, check your distribution’s instruction page on TechNet.

All of the files that the daemon uses for Hyper-V KVP exchange can be found in the /var/lib/hyperv folder. They are hidden, but you can view them with ls’s -a parameter:

Hyper-V KVP exchange

Anyone can read any of these files. Only the root account has write permissions, but that can be misleading. Writing to any of the files that are intended to carry data from the host to the guest has no real effect. The daemon is always monitoring them and only it can carry information from the host side.

What is the Purpose of Each Hyper-V KVP Exchange File?

Each of the files is used for a different purpose.

  • .kvp_pool_0: When an administrative user or an application in the host sends data to the guest, the daemon writes the message to this file. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\External on Windows guests. From the host side, the related commands are ModifyKvpItems, AddKvpItems, and RemoveKvpItems. The guest can read this file. Changing it has no useful effect.
  • .kvp_pool_1: The root account can write to this file from within the guest. It is the equivalent of HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest on Windows guests. The daemon will transfer messages up to the host. From the host side, its messages can be retrieved from the GuestExchangeItems field of the WMI object.
  • .kvp_pool_2: The daemon will automatically write information about the Linux guest into this file. However, you never see any of the information from the guest side. The host can retrieve it through the GuestIntrinsicExchangeItems field of the WMI object. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Auto key on Windows guests. You can’t do anything useful with the file on Linux.
  • .kvp_pool_3: The host will automatically send information about itself and the virtual machine through this file. You can read the contents of this file, but changing it does nothing useful. It is the equivalent of the HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters key on Windows guests.
  • .kvp_pool_4: I have no idea what this file does or what it is for.

What is the Format of the Hyper-V KVP Exchange File on Linux?

Each file uses the same format.

One KVP entry is built like this:

  • 512 bytes for the key. The key is a sequence of non-null bytes, typically interpreted as char. According to the documentation, it will be processed using UTF-8 encoding. After the characters for the key, the remainder of the 512 bytes is padded with null characters.
  • 2,048 bytes for the value. As with the key, these are non-null bytes typically interpreted as char. After the characters for the value, the remainder of the 2,048 bytes is padded with null characters.

KVP entries are written end-to-end in the file with no spacing, headers, or footers.

For the most part, you’ll treat these as text strings, but that’s not strictly necessary. I’ve been on this rant before, but the difference between “text” data and “binary” data is 100% semantics, no matter how much code we write to enforce artificial distinctions. From now until the point when computers can process something other than low voltage/high voltage (0s and 1s), there will never be anything but binary data and binary files. On the Linux side, you have 512 bytes for the key and 2,048 bytes for the value. Do with them as you see fit. However, on the host side, you’ll still need to get through the WMI processing. I haven’t pushed that very far.
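To make the layout concrete, here is a minimal sketch in Python of packing and unpacking records in this format. The function names are mine, not part of any Microsoft tooling; only the 512/2,048-byte null-padded layout comes from the description above:

```python
# One KVP record: a 512-byte key field and a 2,048-byte value field,
# written end-to-end with no headers or separators.
KEY_SIZE = 512
VALUE_SIZE = 2048
RECORD_SIZE = KEY_SIZE + VALUE_SIZE  # 2,560 bytes per entry

def pack_kvp(key: str, value: str) -> bytes:
    """Build one record: UTF-8 text, padded to fixed width with null bytes."""
    k = key.encode("utf-8")
    v = value.encode("utf-8")
    if len(k) >= KEY_SIZE or len(v) >= VALUE_SIZE:
        raise ValueError("key or value too long for a KVP record")
    return k.ljust(KEY_SIZE, b"\x00") + v.ljust(VALUE_SIZE, b"\x00")

def unpack_kvps(data: bytes):
    """Yield (key, value) pairs from records written end-to-end."""
    for offset in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        record = data[offset:offset + RECORD_SIZE]
        key = record[:KEY_SIZE].rstrip(b"\x00").decode("utf-8")
        value = record[KEY_SIZE:].rstrip(b"\x00").decode("utf-8")
        yield key, value
```

The same pair of routines works for any of the pool files, since they all share one format.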

How Do I Use Hyper-V KVP Exchange for Linux?

This is the part where it gets fun. Microsoft only goes so far as to supply the daemon. If you want to push or pull data, that’s all up to you. Or third parties.

But really, all you need to do is write to and/or read from files. The trick is that you need to do it using the binary format that I mentioned above. If you just use a tool that writes simple strings, it will improperly pad the fields, resulting in mangled transfers. So, you’ll need a bit of proficiency in whatever tool you use. The tool itself doesn’t matter, though. Perl, Python, bash scripts… anything will do. Just remember these guidelines:

  • Writing to files _0, _2, _3, and _4 just wastes time. The host will never see it, it will break KVP clients, and the files’ contents will be reset when the daemon restarts.
  • You do not need special permission to read from any of the files.
  • _1 is the only file that it’s useful to write to. You can, of course, read from it.
    • Deleting the existing contents deletes those KVPs. You probably want to update existing or append new.
    • The host only receives the LAST value written for a given key. That means if you write a KVP with key “NewKey” twice in the _1 file, the host will only receive the second one.
    • Delete a KVP by zeroing its fields.
  • If the byte lengths are not honored properly, you will damage that KVP and every KVP following.
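As an illustration of those guidelines, here’s a hedged sketch of an “update or append” write against _1. The helper name and approach are mine; only the file path and the fixed-width record layout come from this article. On a real guest it must run as root:

```python
KEY_SIZE, VALUE_SIZE = 512, 2048
RECORD_SIZE = KEY_SIZE + VALUE_SIZE
POOL_1 = "/var/lib/hyperv/.kvp_pool_1"

def set_kvp(key, value, path=POOL_1):
    """Rewrite the pool file so that `key` appears exactly once with `value`."""
    try:
        with open(path, "rb") as f:
            data = f.read()
    except FileNotFoundError:
        data = b""
    # Parse the existing records so we update in place instead of appending
    # a duplicate; the host would only honor the last duplicate anyway.
    records = {}
    for off in range(0, len(data) - RECORD_SIZE + 1, RECORD_SIZE):
        rec = data[off:off + RECORD_SIZE]
        k = rec[:KEY_SIZE].rstrip(b"\x00").decode("utf-8")
        records[k] = rec[KEY_SIZE:].rstrip(b"\x00").decode("utf-8")
    records[key] = value
    with open(path, "wb") as f:
        for k, v in records.items():
            # Honor the byte lengths exactly, or every following KVP breaks.
            f.write(k.encode("utf-8").ljust(KEY_SIZE, b"\x00"))
            f.write(v.encode("utf-8").ljust(VALUE_SIZE, b"\x00"))
```

Deleting a KVP would follow the same read-modify-write pattern: drop the key from the parsed dictionary (or zero its fields) before writing the file back out.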

Source Code for a Hyper-V KVP Exchange Utility on Linux

I’ve built a small utility that can be used to read, write, and delete Hyper-V KVPs on Linux. I wrote it in C++ so that it can be compiled into a single, neat executable.

The long-term goal is to move this to my Github account. I have a number of things on my to-do list before that can happen. However, any updates will be published there. The listing on this article will be forever locked in a 1.0 state.

Compile Instructions

Each file is set so that they all live in the same directory. Use make to build the sources and sudo make install to put the executable into the /bin folder.

Install Instructions

Paste the contents of all of these files into accordingly-named files. File names are in the matching section header and in the code preview bar.

Transfer all of the files to your Linux system. It doesn’t really matter where. They just need to be in the same folder.


Usage Instructions

Get help with:

  • hvkvp --help
  • hvkvp read --help
  • hvkvp write --help
  • hvkvp delete --help

Each includes the related keys for that command and some examples.

Code Listing

The file list:

  • makefile
  • main.cpp
  • hvkvp.h
  • hvkvp.cpp
  • hvkvpfile.h
  • hvkvpfile.cpp
  • hvkvpreader.h
  • hvkvpreader.cpp
  • hvkvpremover.h
  • hvkvpremover.cpp
  • hvkvpwriter.h
  • hvkvpwriter.cpp


More in this series:

Part 1: Explanation

Part 2: Implementation


New Webinar: Future-Proofing your Datacenter with Microsoft Azure Stack

11 Aug 2017 by Altaro Software     Altaro News

If you’ve been working in IT for the last year or two, you’ve surely heard of Microsoft Azure Stack in some manner. Many an IT pro who isn’t familiar with it will hear the name and assume it’s just another Azure component, given Azure’s rather fast development cycle. If you’re one of those people though, what if I told you Microsoft Azure Stack (or MAS) allows you to bring the power of Azure to your very own datacenter? Imagine being able to host workloads seamlessly in the public cloud and on-premises, with one unified management methodology for each. This is just a taste of the power of Azure Stack inside of your datacenter.

If this sounds compelling, then we’ve got just the webinar for you!

Join Microsoft Cloud and Datacenter Management MVPs Thomas Maurer and Andy Syrewicze in our latest Altaro educational webinar. They will be providing you with the information, tools, and skills you need to prepare for the arrival of this new datacenter technology and prepare your organization for the age of private hosted clouds on MAS.

In this webinar, you’ll learn:

  • What is Microsoft Azure Stack
  • Exactly how it will change your datacenter
  • How to start testing it in your own business today

At the end of the session, we’ll also run a Q&A on the topic, as always, during which you can ask Thomas and Andy your questions!

Interested? Register here – limited seats available!

New Altaro Hyper-V webinar Microsoft Azure Stack

Date: Monday, August 28th, 2017
Time: For US: 10am PT / 1pm ET. For RoW: 4pm CET.

Azure Stack Webinar Registration

Do you already have questions about Microsoft Azure Stack?

Andy and Thomas will be taking your questions and tackling as many as they can during the Q&A part of our Azure Stack webinar.

If you have any questions on the topic that you’d like to get answered, leave a comment below or ask your questions during the webinar!

The Complete Guide to Hyper-V Networking

08 Aug 2017 by Eric Siron     Hyper-V Articles

I frequently write about all sorts of Hyper-V networking topics. I was surprised to learn that we’ve never published a unified article that gives a clear and complete how-to that brings all of these related topics into one resource. We’ll fix that right now.

Understanding the Basics of Hyper-V Networking

We have produced copious amounts of material explaining the various concepts around Hyper-V networking. I want to spend as little time as possible on that here. Comprehension is very important, though, so here’s an index of expository work:

  • How the Hyper-V Virtual Switch Works: If you don’t understand the contents of that article, you will have a very difficult time administering Hyper-V. Read it, and read it again until you have absorbed it. It answers easily 90% of the questions that I receive about Hyper-V networking. If something there doesn’t make sense, ask.
  • The OSI model and Hyper-V: A quick read on the OSI model and a brief introduction to its relevance to Hyper-V. If you’ve been skimming over the terms “layer 2” and “layer 3” because you don’t have a solid understanding of them, read it.
  • Hyper-V and VLANs: That article ties closely to the OSI article. VLANs are a layer 2 technology. Due to common usage, newcomers often confuse them with layer 3 operations. I’m frequently asked about trunking multiple VLANs into a virtual machine, even though I’m fairly certain that most people don’t really want to do that. This article should help you sort out those concepts.
  • Hyper-V and IP: That article also ties closely to the OSI article and contrasts against the VLAN article. It doesn’t contain a great deal of direct Hyper-V knowledge, but it should help fill any of the most serious deficiencies in TCP/IP comprehension.
  • Hyper-V and Link Aggregation (Teaming): That article describes the concepts around NIC teaming and addresses some of the myths that I encounter. The article that you’re reading now will bring you the “how”.
  • Hyper-V and DNS: If I were to compile a list of ridiculously simple technologies that people tend to ridiculously over-complicate, I’d place DNS in the top slot. Hyper-V itself cares nothing about DNS, but its management operating systems and guests care very much. Poor DNS configurations can be blamed for nearly all of the world’s technological ills. You must learn it. It won’t take long.
  • Hyper-V and Binding Order: Lots of administrators spend lots of time wringing their hands over network binding order. Stop. Only the DNS subsystem and one other thing (that I’ve now forgotten about) pay any attention to the binding order. If you get that, then you don’t really need to read the linked article.
  • Hyper-V and Load Balancing Algorithms: The “hows” of load balancing algorithms will be on display in the article that you’re reading. If you want to understand the “what” and the “why”, then follow the link.
  • Hyper-V and MPIO and Teaming for Storage: I see lots of complaints from people that create a switch independent team on a pair of 10GbE pipes that wind back to a storage array with 5x 10,000 RPM disks. They test it with a file copy and don’t understand why they can’t move 20Gbps. Invariably, they blame Hyper-V. If you don’t want to be that guy, the linked article should help.

That should serve as a decent reference on the concepts. If you don’t understand something written below, it’s probably because you don’t understand something linked above.

Contents of this Article

I will demonstrate common PowerShell and, where available, GUI methods for working with:

  • Standard network adapter teaming
  • Hyper-V virtual switch
  • Switch Embedded Teaming
  • Hyper-V virtual adapters

PowerShell or GUI?

Use PowerShell for quick, precise, repeatable, scriptable operations. Use the GUI to do the same amount of work in twice the time following four times as many instructions. I will show all of the PowerShell methods first for the benefit of those that just want to get things done. If you prefer to plod through dozens of GUI screens, scroll to the bottom half of the article. Be aware that many things can’t be done in the GUI.

If you’re just getting started with PowerShell, remember to use tab completion! It makes all the difference!

Creating and Working with Network Adapter Teams for Hyper-V in PowerShell

If you’re interested in Switch Embedded Teaming (Server 2016 only), then look a few headings downward. This section applies to the standard Microsoft teaming methods.

First things first. You need to know which adapters to add to the team. Discover your available adapters:
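Something like this will do it (the property list is a typical selection; Get-NetAdapter exposes many more):

```powershell
# List all adapters with their names, descriptions, and link status.
Get-NetAdapter | Sort-Object -Property Name |
    Format-Table -Property Name, InterfaceDescription, Status, LinkSpeed
```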

I’ll use my system for reference. I’ve renamed all of the adapters in my system so that I can recognize them. If your hardware supports Consistent Device Naming, then you’ll likely already have actionable names (like “Slot 4 Port 1”). If not, you’ll need to find your own way to identify adapters. I use my switch’s interface to enable the ports one at a time, identifying the adapters as they switch to Connected status.


The PowerShell cmdlets for networking allow you to use names, indexes, or descriptions to manipulate adapters. The teaming cmdlets only work with names.

Create a Windows Team

Create teams with New-NetLbfoTeam.

I use my demo machines’ “P*L” adapters for Hyper-V teams. One way to create a team for them:
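Something like this, using hypothetical adapter names that match my P*L pattern (substitute your own, comma-separated):

```powershell
New-NetLbfoTeam -Name 'vSwitch' -TeamMembers 'P1L', 'P2L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```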

I usually Name my team for the virtual switch that I create on it, but choose any name that you like. The TeamMembers field accepts a comma-separated list of the names of the physical adapters to add to the team. I promised not to go into detail on the options, and I won’t. Just remember that the other parameters and their values are selectable by tab completion. SwitchIndependent is the preferred teaming mode in most cases with LACP being second. I have never seen any compelling reason to use a load balancing algorithm other than Dynamic.

To save even more time and space, the cmdlet is smart enough to allow you to use wildcards for the adapter names:
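For example (all of my adapters for this team match P*L; your pattern will differ):

```powershell
New-NetLbfoTeam -Name 'vSwitch' -TeamMembers 'P*L' -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```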

If you want to avoid the prompt for scripting purposes, add the Force parameter.

A Note on the Team NIC

When you create a team, you also create a logical adapter that represents that team. A logical team NIC (often abbreviated as tNIC) works in a conceptually similar fashion to a Hyper-V virtual NIC. You treat it just like you would a physical adapter — give it an IP address, etc. The team determines what to do with your traffic. If you use the cmdlets as shown above, one team NIC will be created and it will have the same name as the team (“vSwitch”, in this case). You can override that name with the TeamNicName parameter.

You can also add more team NICs to a team. For a team that hosts a Hyper-V virtual switch, it’s neither recommended nor supported, although the system will allow it. Additional tNICs must be created in their own VLAN, which hides that VLAN from the team. Also, it’s not documented or clear how additional tNICs affect any QoS settings on a Hyper-V virtual switch.

For the rest of this article, the single automatically-created tNIC will be the only one referenced.

Examine Teams and tNICs

View all teams and their statuses with Get-NetLbfoTeam. You don’t need to supply any parameters. I get more use out of Get-NetLbfoTeamMember, also without parameters.
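Both run happily without parameters:

```powershell
Get-NetLbfoTeam          # team-level view: name, teaming mode, status
Get-NetLbfoTeamMember    # member-level view: each physical adapter and its state within its team
```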

Remove and Add Team Members

You can easily remove team members if you have the need:
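For example (team and adapter names are the hypothetical ones from my earlier examples):

```powershell
Remove-NetLbfoTeamMember -Team 'vSwitch' -Name 'P1L'
```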

And add them:
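Same shape, different verb (again with my hypothetical names):

```powershell
Add-NetLbfoTeamMember -Team 'vSwitch' -Name 'P1L'
```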

Removing an adapter obviously disrupts the traffic on that member, but the team will handle it well. You can add a team member at any time.

Delete a Team

Use Remove-NetLbfoTeam to get rid of a team. You can use the Name parameter to reverse what you’ve done. Since my hosts only ever use a single team, I can do this:
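With only one team present, the pipeline needs no names at all:

```powershell
# Removes every team on the host; with a single team, that's the team.
Get-NetLbfoTeam | Remove-NetLbfoTeam
```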

Working with the Hyper-V Virtual Switch

I always use Hyper-V virtual switches and Microsoft teams together, so I have a certain technique. You may choose a different path. Just understand that external switches must be created on an adapter. I will always use the default tNIC. If you’re not teaming, then you’ll pick a single physical NIC. Use Get-NetAdapter as shown in the teaming section above to determine the name of the adapter that you wish to use.

Create a Virtual Switch

Use New-VMSwitch to create a new switch. Most commonly, you’ll want the external type (refer to the articles linked at the beginning if you need an explanation). External switches require you to specify a logical or physical (but not virtual) adapter. You can use its friendly name or its less friendly description. I use the name. In my case, I’m binding to a team’s logical adapter, so, as explained a bit ago, I’ll use the team’s name.
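A sketch of how I'd run it on my system, where the team and its tNIC are both named "vSwitch" (Weight QoS mode chosen deliberately; see the notes that follow):

```powershell
# NetAdapterName points at the team NIC; on an unteamed host, use the
# physical adapter's name instead.
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'vSwitch' -AllowManagementOS $false -MinimumBandwidthMode Weight
```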

For internal or private, use the SwitchType parameter instead of the NetAdapterName parameter and do not use AllowManagementOS.
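For example (hypothetical switch name):

```powershell
New-VMSwitch -Name 'vInternal' -SwitchType Internal
```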

Several things to note about the New-VMSwitch cmdlet:

  • New-VMSwitch is not one of the better-developed cmdlets. Usually, when tabbing through available parameters, your options are presented in a sensible order. New-VMSwitch’s parameters are all over the place.
  • The documentation for every version of New-VMSwitch always says that the default MinimumBandwidthMode is Weight. I used to classify this as an error, but it’s been going on for so long I’m starting to wonder if it’s an unfunny inside joke or a deliberate lie. The default is Absolute. Most people won’t ever need QoS, so I don’t know that it has practical importance. However, you can’t change a switch’s QoS mode after it’s been created, so I’d rather tell you this up front.
  • The “AllowManagementOS” parameter’s name is nonsense. What it really means is “immediately create a virtual adapter for the management operating system”. The only reason that I don’t allow it to create one is because it uses the same name for the virtual adapter as the virtual switch. That’s confusing for people that don’t know how all of this works. You can always add virtual adapters later, so the “allow” verb makes no sense whatsoever.

Manipulate a Virtual Switch

Use Set-VMSwitch to make changes to your switch. The cmdlet has so many options that I can’t rationally explain it all. Just scan the parameter list to find what you want. A couple of notes, though:

  • You can’t change the QoS mode of an existing virtual switch.
  • You can switch between External, Internal, and Private types.
    • To go from External to either of the other types: Set-VMSwitch -Name vSwitch -SwitchType Internal. Just use Private instead of Internal if you want that switch type.
    • To go from Private or Internal to External: Set-VMSwitch -Name vSwitch -NetAdapterName vSwitch. You’d also use this format to move a virtual switch from one physical/logical network adapter to another.
  • You can’t rename a virtual switch with this cmdlet. Use Rename-VMSwitch.

Remove a Virtual Switch

Appropriately enough, Remove-VMSwitch removes a virtual switch.

You can remove all virtual switches in one shot:
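Via the pipeline; Force suppresses the confirmation prompt:

```powershell
Get-VMSwitch | Remove-VMSwitch -Force
```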

When a switch is removed, virtual NICs on VMs are disconnected. Virtual NICs for the management OS are destroyed.

Speaking of virtual NICs, that’s the next thing you care about if you’re using a standard virtual switch. I’ll explain them after the Switch Embedded Team section.

Working with Hyper-V Switch Embedded Teams

Server 2016 adds Switch Embedded Teaming. If you’re planning to create a team of gigabit adapters, then I recommend that you use the traditional teaming method outlined above. I wrote an article explaining why.

Create a Switch Embedded Team (SET)

Use the familiar New-VMSwitch to set it up, but add the EnableEmbeddedTeaming option. Two other options not shown in the following are EnableIov and EnablePacketDirect.
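A sketch, reusing my hypothetical adapter names (for a SET, the member adapters go directly on NetAdapterName):

```powershell
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'P1L', 'P2L' -EnableEmbeddedTeaming $true -MinimumBandwidthMode Weight
```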

The documentation continues to be wrong on MinimumBandwidthMode. If you don’t specify otherwise, you get Absolute. Prefer Weight.

Use EnableIov if, and only if, you have 10GbE adapters that support it. I cannot find any details on Packet Direct anywhere. Everyone just repeats that it provides a low-latency connection that bypasses the virtual switch. A few sources add that it will force Hyper-V Port load balancing mode. My hardware doesn’t support it, so I can’t test it. I assume that it only works on 10GbE and probably only with SR-IOV.

Once a SET has been created, you view it with both Get-VMSwitch and Get-VMSwitchTeam. For whatever reason, they decided that the output should include the difficult-to-read interface descriptions instead of names like Get-NetLbfoTeam. You can see the adapter names with something like this:
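One way, mapping the descriptions back through Get-NetAdapter (switch name from my examples):

```powershell
(Get-VMSwitchTeam -Name 'vSwitch').NetAdapterInterfaceDescription |
    ForEach-Object { Get-NetAdapter -InterfaceDescription $_ } |
    Format-Table -Property Name, InterfaceDescription, Status
```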

The SET cmdlets have no analog for Get-NetLbfoTeamMember.

SET does not expose a logical adapter to Windows the way that LBFO does.

Manipulate a Switch Embedded Team

You can change the members and the load balancing mode for a SET using Set-VMSwitchTeam.
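For example (hypothetical names; the SET algorithms are HyperVPort and Dynamic):

```powershell
Set-VMSwitchTeam -Name 'vSwitch' -LoadBalancingAlgorithm HyperVPort
Set-VMSwitchTeam -Name 'vSwitch' -NetAdapterName 'P1L', 'P2L'
```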

Add and Remove SET Members

Instead of Set-VMSwitchTeam, you can use Add-VMSwitchTeamMember and Remove-VMSwitchTeamMember.
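For example (hypothetical names again):

```powershell
Add-VMSwitchTeamMember -VMSwitchName 'vSwitch' -NetAdapterName 'P3L'
Remove-VMSwitchTeamMember -VMSwitchName 'vSwitch' -NetAdapterName 'P3L'
```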

Remove a SET

Use Remove-VMSwitch to remove a SET. There is no Remove-VMSwitchTeam cmdlet.

Working with Virtual Network Adapters

You can attach virtual network adapters (vNICs) to the management operating system or virtual machines. You’ll most commonly use them with virtual machines, but you’ll also usually do less work with them. Their default settings tend to be sufficient and you can work with them through their owning virtual machine’s GUI property pages.

For almost every vNIC-related cmdlet, you must indicate whether you’re working with a management OS vNIC or a VM’s vNIC. Do this with the ManagementOS switch parameter or by supplying a value for either the VM or the VMName parameters. If you have a vNIC object, such as the one output by Get-VMNetworkAdapter, then you can pipe it to most of the vNIC cmdlets or provide it as the VMNetworkAdapter parameter. You won’t need to specify any of the other identifying parameters, including those previously mentioned in this paragraph, when you provide the vNIC object.

View a Virtual Network Adapter

The simple act of creating a virtual machine, or of creating a virtual switch with AllowManagementOS set, creates a vNIC. To view them all:
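```powershell
Get-VMNetworkAdapter -All
# or, separately:
Get-VMNetworkAdapter -ManagementOS
Get-VMNetworkAdapter -VMName *
```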

Ordinarily, we give descriptive names to management OS vNICs, especially when we use more than one. If you left AllowManagementOS at its default of $true, then you’ll have a vNIC with the same name as your vSwitch.

Each management OS vNIC will appear in the Network Connections applet and Get-NetAdapter with the format vEthernet (vNICName). Avoid confusion by changing the default vNIC’s name (shown in a bit). Many newcomers believe that this vNIC is the virtual switch because of that name. You cannot “see” the virtual switch anywhere except in Hyper-V-specific management tools.
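Renaming that default vNIC looks like this (assuming it inherited the name "vSwitch"; "Management" is my hypothetical replacement):

```powershell
Rename-VMNetworkAdapter -ManagementOS -Name 'vSwitch' -NewName 'Management'
```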

Ordinarily, we leave the default name of “Network Adapter” for virtual machine vNICs. New in 2016, changes to a guest’s vNIC name will appear in the guest operating system if it supports Consistent Device Naming (CDN).

Manipulate a Virtual Network Adapter

Use Set-VMNetworkAdapter to change vNIC settings. As you can see, this cmdlet is quite busy; I could write multiple full-length articles on various parameter groups. Settings categories available with this command:

  • Quality of service (Qos)
  • Security (MAC spoofing, router guard, DHCP guard, storm limit)
  • Replica
  • In-guest teaming
  • Performance (VMQ, IOV, vRSS, Packet Direct)
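A small taste, using a hypothetical VM named "svtest" (DhcpGuard, RouterGuard, and MacAddressSpoofing are real parameters that take On/Off):

```powershell
Set-VMNetworkAdapter -VMName 'svtest' -DhcpGuard On -RouterGuard On
Set-VMNetworkAdapter -VMName 'svtest' -MacAddressSpoofing On
```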

You need a different cmdlet for VLAN manipulation, though.

Manipulate Virtual Network Adapter VLANs

Use Set-VMNetworkAdapterVlan for all things VLAN on vNICs.

To place a management OS vNIC into a VLAN:
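Assuming a management OS vNIC that I've renamed to "Management" (hypothetical) and VLAN 10:

```powershell
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Access -VlanId 10
```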

Remember that the VlanId parameter requires the Access parameter.

Also remember that there is no such thing as “VLAN 0”. For some unknown reason, the cmdlet will accept it and assign the adapter to VLAN 0, but strange things might happen. Usually, it’s just that you can’t get traffic in or out of the adapter. If you want to clear the adapter’s VLAN, don’t use VLAN 0. Use Untagged:
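```powershell
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management' -Untagged
```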

I’m not going to cover trunking or private VLANs. Trunking is very advanced and I don’t think more than 5 percent of the people that have asked me how to do it really wanted to do it. If you want a single virtual machine to exist in multiple VLANs, add virtual adapters and assign individual VLANs. Private VLANs require you to work with PrimaryVlanId, SecondaryVlanId, SecondaryVlanIdList, Promiscuous, Community, and Isolated as necessary. If you need to use private VLANs, then you or your networking engineer should already understand each of these terms and intuitively understand how to use the parameters.

Since we’re commonly asked, the Promiscuous parameter on Set-VMNetworkAdapterVlan does not have anything to do with accepting or participating in all passing layer 2 traffic. It is only for private VLANs.

Adding and Removing Virtual Network Adapters

Use Add-VMNetworkAdapter and Remove-VMNetworkAdapter for their respective tasks.
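For example (VM, vNIC, and switch names are hypothetical):

```powershell
Add-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch' -Name 'Backup'
Remove-VMNetworkAdapter -VMName 'svtest' -Name 'Backup'

# Management OS vNICs use the ManagementOS switch instead of VMName.
Add-VMNetworkAdapter -ManagementOS -SwitchName 'vSwitch' -Name 'Cluster'
```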

Connecting and Disconnecting Virtual Network Adapters to/from Virtual Switches

These cmdlets only work for virtual machine vNICs. You cannot dis/connect management OS vNICs; you can only add or remove them.

Connect always works. You do not need to disconnect an adapter from its current virtual switch to connect it to a new one. If you want to connect all of a VM’s vNICs to the same switch, specify only the virtual machine in VMName.
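For instance, with a hypothetical VM named "svtest":

```powershell
# Moves every vNIC on the VM to the named switch.
Connect-VMNetworkAdapter -VMName 'svtest' -SwitchName 'vSwitch'
```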

If you provide the Name parameter, then only that vNIC will be altered:
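```powershell
# Only the vNIC named "Network Adapter" is moved; any others stay put.
Connect-VMNetworkAdapter -VMName 'svtest' -Name 'Network Adapter' -SwitchName 'vSwitch'
```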

These two cmdlets do not provide a VM parameter. It is possible for two virtual machines to have the same name. If you need to discern between two VMs with the same name, use the pipeline and filter from other cmdlets:
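One way to do that (the GUID is a placeholder; substitute the Id of the VM you actually want):

```powershell
Get-VM -Name 'svtest' |
    Where-Object -Property Id -EQ '00000000-0000-0000-0000-000000000000' |
    Get-VMNetworkAdapter |
    Connect-VMNetworkAdapter -SwitchName 'vSwitch'
```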

Use Disconnect-VMNetworkAdapter the same way, leaving off the SwitchName parameter.

VLAN information is preserved across dis/connects.

Other vNIC Settings

I did not touch on the entire range of possible vNIC cmdlets or their settings. You can go to the root 2016 Hyper-V PowerShell page and view all available cmdlets. Search the page for adapter, and you’ll find many hits.

Using the GUI for Hyper-V Networking

The GUI lags dramatically behind PowerShell for most things related to Hyper-V. I doubt any category shows that as strongly as networking. So, whether you (or I) like it or not, using the GUI for Hyper-V networking qualifies as “beginner mode”. Most of the things that I showed you above cannot be done in the GUI at all. So, unless you’re managing a single host with a single network adapter, the GUI will probably not help you much.

The following sections show you the few things that you can do in the GUI.

Working with Windows Teams

The GUI does allow you some decent capability when working with Windows teams.

Create a Windows Team

You can use the GUI to create teams on Server 2012 and later. You can find the applet in Server Manager on the Local Server tab.


You can also run lbfoadmin.exe from the Run window or an elevated prompt.

Once open, click the Tasks drop-down in the Teams section. Click New Team.


You’ll get the NIC Teaming/New team dialog, where you’ll need to fill out most fields.


Manipulate a Team

To make changes to your team later, just return to the same screens and dialogs using the same methods as you used to create the team.


Delete a Team

To delete a team, use the Delete function in the same place on the main lbfoadmin screen where you found the New Team function. Make sure to highlight the team that you want to delete, first.


Working with the Hyper-V Virtual Switch

The GUI provides very limited ability to work with Hyper-V virtual switches. You can’t configure QoS (except on vNICs) and it allows nearly nothing to be done for management OS vNICs.

Create a Hyper-V Virtual Switch

When using the Add Roles wizard to enable Hyper-V, you can create a virtual switch. I won’t cover that. If you’re looking at that screen, wondering what to do, I recommend that you skip it and follow the PowerShell directions above. If you simply must use a GUI, then wait until after the role finishes installing and create one using Hyper-V Manager.

To create a new virtual switch in Hyper-V Manager:

  1. Right-click the host in Hyper-V Manager and click Virtual Switch Manager. Alternatively, you’ll find this same menu at the far right of the main screen under Actions.
  2. At the left of the dialog, highlight New virtual network switch.
  3. On the right, choose the type of switch that you want to create. I’m not entirely sure why it even asks because you can pick anything you want once you click Create Virtual Switch.
  4. The creation screen itself is very busy. I’ll tackle that in a moment. First, look to the left of the dialog at the blue text. It’s a new entry named New Virtual Switch. It represents what you’re working on now. If you change the name, you’ll see this list item change as well. You can use Apply to make changes and continue working without closing the dialog. You can even add another switch before you accept this one.

Now for the new switch screen itself, item by item:

First item: name your switch.

I would skip the notes field, especially in a failover cluster.

For Connection Type, you decide between External, Internal, and Private. That’s why I don’t understand why it asked you on the initial dialog. If you choose External, you’ll need to pick a logical or physical adapter for binding. Unfortunately, you can only see the fairly useless adapter description fields. Look in the Network Connections applet to determine which is which. This right here is one of the primary reasons I like switch creation in PowerShell better.

Remember that the IOV setting is permanent.

I despise the item here called Allow management operating system to share this network adapter. That description has absolutely no relation to what the checkbox does. If you check it, it will automatically create a virtual NIC in the management OS for this virtual switch and give it the same name as the virtual switch. That’s all it does. There is no “sharing”, and there is no permanent allowing or disallowing going on.

The VLAN ID section ties to the nonsensical “Allow…” field. If you let the system create a management OS vNIC for you, then you can use this to give it a VLAN ID.

You can use the Remove button if you decide that you don’t want to create the virtual switch after all. Cancel would work, too.

Where’s the QoS? Oh, you can’t set the QoS mode for a virtual switch using the GUI. PowerShell only. If you use this screen to create a virtual switch, it will use the Absolute QoS mode. Forever. Another reason to choose PowerShell.

Manipulate a Virtual Switch

To make changes to a virtual switch, follow the exact steps that you did to create one, except choose the existing virtual switch at the left of the Virtual Switch Manager dialog. Of course, you can’t change much, but there it is.

Remove a Virtual Switch

Retrace your creation steps. Select the virtual switch at the left of the Virtual Switch Manager screen. Click the Remove button at the bottom right.

Working with Hyper-V Switch Embedded Teams

You can’t use the GUI to work with Hyper-V SET. PowerShell-only.

You can use the Virtual Switch Manager as described previously to remove one, though.

Working with Hyper-V Virtual Network Adapters

The GUI provides passably decent ability to work with vNICs — for guests. The only place that you can do anything with management OS vNICs is on that virtual switch creation screen. You can add or remove exactly one vNIC and you can set or remove its VLAN. You can’t use the GUI to work with two or more management OS vNICs. In fact, if you use PowerShell to add a second management OS vNIC, all related items in the dialog are grayed out and unusable.

But, for virtual machines, the GUI exposes most functionality.

Manipulate Virtual Network Adapters on Virtual Machines

In Hyper-V Manager or Failover Cluster Manager, open up the Settings dialog for the virtual machine to work with. On the left, find the vNIC that you want to work with. Highlight it, and the page will switch to its configuration screen. Expand the vNIC entry to see its subtabs, Hardware Acceleration and Advanced Features.


On this screen, you can change the virtual switch this adapter connects to, or disconnect it. You can change or remove its VLAN. You can set its QoS. The numbers here are given in Absolute since that’s the default. It doesn’t change if your switch uses Weight mode. I would use PowerShell for that. You can also Remove the vNIC here.

The Hardware Acceleration subtab:


Here, you can change:

  • Whether a VMQ can be assigned to this vNIC. The host’s adapters must support VMQ and a queue must be available for this checkbox to have any effect.
  • Whether the guest can offload IPSec tasks to hardware. The host’s physical adapter must support IPSec task offloading and have sufficient resources.
  • Whether an SR-IOV virtual function can be assigned to this vNIC. The host’s adapters and motherboard must support IOV, it must be enabled on the adapter and in BIOS, the virtual switch must either be unteamed or on a SET, and a virtual function must be available for this checkbox to have any effect.

The Advanced Features subtab:



Here, you can change:

  • MAC address, mode and address both
  • Whether or not the guest can spoof the MAC
  • If the guest is prevented from receiving DHCP discover/request frames
  • If the guest is prevented from receiving router discovery packets
  • If a failover cluster will move the guest if it loses network connectivity (Protected network)
  • If the vNIC’s traffic is mirrored to another vNIC. This feature seems to have troubles, FYI.
  • If teaming is allowed in the guest. The guest requires at least two vNICs and the virtual switch must be placed on a team or SET for this to function.
  • The Device naming switch allows the name of the vNIC to be propagated into the guest where an OS that supports Consistent Device Naming (CDN) can use it. Note that this is disabled by default, and the GUI doesn’t allow you to rename the vNIC. Use PowerShell for that.

Remove a Virtual Network Adapter

To remove a vNIC from a guest, find its tab in the VM’s settings dialog in Hyper-V Manager or Failover Cluster Manager (see the Manipulate Virtual Network Adapters on Virtual Machines section above) and use the Remove button at the bottom right.

Note: This guide will be periodically updated to make sure it covers all possible Hyper-V Networking problems. If you think I’ve missed anything please let me know in the comments below.

Have any questions or feedback?

Leave a comment below!

Hyper-V virtual machine gallery and networking improvements

In January, we added Quick Create to Hyper-V manager in Windows 10.  Quick Create is a single-page wizard for fast, easy, virtual machine creation.

Starting in the latest fast-track Windows Insider builds (16237+) we’re expanding on that idea in two ways.  Quick Create now includes:

  1. A virtual machine gallery with downloadable, pre-configured, virtual machines.
  2. A default virtual switch to allow virtual machines to share the host’s internet connection using NAT.


To launch Quick Create, open Hyper-V Manager and click the “Quick Create…” button.

From there you can either create a virtual machine from one of the pre-built images available from Microsoft or use a local installation source. Once you’ve selected an image or chosen installation media, you’re done! The virtual machine comes with a default name and a pre-made network connection using NAT, which can be modified in the “more options” menu.

Click “Create Virtual Machine” and you’re ready to go; granted, downloading the virtual machine will take a while.

Details about the Default Switch

The switch, named “Default Switch” (or “Layered_ICS”), allows virtual machines to share the host’s network connection. Without getting too deep into networking (saving that for a different post), this switch has a few unique attributes compared to other Hyper-V switches:

  1. Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet.
  2. It’s available as soon as you enable Hyper-V – you won’t lose internet setting it up.
  3. You can’t delete it.
  4. It has the same name and device ID (GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) on all Windows 10 hosts, so virtual machines on recent builds can assume the same switch is present on every Windows 10 Hyper-V host.

I’m really excited by the work we are doing in this area.  These improvements make Hyper-V a better tool for people running virtual machines on a laptop.  They don’t, however, replace existing Hyper-V tools.  If you need to define specific virtual machine settings, New-VM or the new virtual machine wizard are the right tools.  For people with custom networks or complicated virtual network needs, continue using Virtual Switch Manager.

Also keep in mind that all of this is a work in progress.  There are rough edges for the default switch right now and there aren’t many images in the gallery.  Please give us feedback!  Your feedback helps us.  Let us know what images you would like to see and share issues by commenting on this blog or submitting feedback through Feedback Hub.

