
95 Best Practices for Optimizing Hyper-V Performance

We can never get enough performance. Everything needs to be faster, faster, faster! You can find any number of articles about improving Hyper-V performance and best practices. Unfortunately, a lot of that information contains errors, FUD, and misconceptions, and some of it is just plain dated. Technology has changed, and experience continually teaches us new insights. From that, we can build a list of best practices that will help you tune your system for maximum performance.


Philosophies Used in this Article

This article focuses primarily on performance. It may deviate from other advice that I’ve given in other contexts. A system designed with performance in mind will be built differently from a system with different goals. For instance, a system that tries to provide high capacity at a low price point would have a slower performance profile than some alternatives.

  • Subject matter scoped to the 2012 R2 and 2016 product versions.
  • I want to stay on target by listing the best practices with fairly minimal exposition. I’ll expand ideas where I feel the need; you can always ask questions in the comments section.
  • I am not trying to duplicate pure physical performance in a virtualized environment. That’s a wasted effort.
  • I have already written an article on best practices for balanced systems. It’s a bit older, but I don’t see anything in it that requires immediate attention. It was written for the administrator who wants reasonable performance but also wants to stay under budget.
  • This content targets datacenter builds. Client Hyper-V will follow the same general concepts with variable applicability.

General Host Architecture

If you’re lucky enough to be starting in the research phase — meaning, you don’t already have an environment — then you have the most opportunity to build things properly. Making good purchase decisions pays more dividends than patching up something that you’ve already got.

  1. Do not go in blind.
    • Microsoft Assessment and Planning Toolkit will help you size your environment: MAP Toolkit
    • Ask your software vendors for their guidelines for virtualization on Hyper-V.
    • Ask people that use the same product(s) if they have virtualized on Hyper-V.
  2. Stick with logo-compliant hardware. Check the official list: https://www.windowsservercatalog.com/
  3. Most people will run out of memory first, disk second, CPU third, and network last. Purchase accordingly.
  4. Prefer newer CPUs, but think hard before going with bleeding edge. You may need to improve performance by scaling out. Live Migration requires physical CPUs to be the same or you’ll need to enable CPU compatibility mode. If your environment starts with recent CPUs, then you’ll have the longest amount of time to be able to extend it. However, CPUs commonly undergo at least one revision, and that might be enough to require compatibility mode. Attaining maximum performance may reduce virtual machine mobility.
  5. Set a target density level, e.g. “25 virtual machines per host”. While it may be obvious that higher densities result in lower performance, finding the cut-off line for “acceptable” will be difficult. However, having a target VM number in mind before you start can make the challenge less nebulous.
  6. Read the rest of this article before you do anything.

Management Operating System

Before we carry on, I just want to make sure to mention that Hyper-V is a type 1 hypervisor, meaning that it runs right on the hardware. You can’t “touch” Hyper-V because it has no direct interface. Instead, you install a management operating system and use that to work with Hyper-V. You have three choices:

  • Windows Server with the full GUI
  • Windows Server in Core mode
  • Hyper-V Server

Note: Nano Server initially offered Hyper-V, but that functionality will be removed (or has already been removed, depending on when you read this). Most people ignore the fine print of using Nano Server, so I never recommended it anyway.

TL;DR: In absence of a blocking condition, choose Hyper-V Server. A solid blocking condition would be the Automatic Virtual Machine Activation feature of Datacenter Edition. In such cases, the next preferable choice is Windows Server in Core mode.

I organized those in order by distribution size. Volumes have been written about the “attack surface” and patching. Most of that material makes me roll my eyes. No matter what you think of all that, none of it has any meaningful impact on performance. For performance, concern yourself with the differences in CPU and memory footprint. The widest CPU/memory gap lies between Windows Server and Windows Server Core. When logged off, the Windows Server GUI does not consume many resources, but it does consume some. The space between Windows Server Core and Hyper-V Server is much tighter, especially when the same features/roles are enabled.

One difference between Core and Hyper-V Server is the licensing mechanism. On Datacenter Edition, that does include the benefit of Automatic Virtual Machine Activation (AVMA). That only applies to the technological wiring. Do not confuse it with the oft-repeated myth that installing Windows Server grants guest licensing privileges. The legal portion of licensing stands apart; read our eBook for starting information.

Because you do not need to pay for the license for Hyper-V Server, it grants one capability that Windows Server does not: you can upgrade at any time. That allows you to completely decouple the life cycle of your hosts from your guests. Such detachment is a hallmark of the modern cloud era.

If you will be running only open source operating systems, Hyper-V Server is the natural choice. You don’t need to pay any licensing fees to Microsoft at all with that usage. I don’t realistically expect any pure Linux shops to introduce a Microsoft environment, but Linux-on-Hyper-V is a fantastic solution in a mixed-platform environment. And with that, let’s get back onto the list.

Management Operating System Best Practices for Performance

  1. Prefer Hyper-V Server first, Windows Server Core second.
  2. Do not install any software, feature, or role in the management operating system that does not directly aid the virtual machines or the management operating system. Hyper-V prioritizes applications in the management operating system over virtual machines. That’s because it trusts you; if you are running something in the management OS, it assumes that you really need it.
  3. Do not log on to the management operating system. Install the management tools on your workstation and manipulate Hyper-V remotely.
  4. If you must log on to the management operating system, log off as soon as you’re done.
  5. Do not browse the Internet from the management operating system. Don’t browse from any server, really.
  6. Stay current on mainstream patches.
  7. Stay reasonably current on driver versions. I know that many of my peers like to install drivers almost immediately upon release, but I can’t join that camp. While it’s not entirely unheard of for a driver update to bring performance improvements, it’s not common. With all of the acquisitions and corporate consolidations going on in the hardware space — especially networking — I feel that the competitive drive to produce quality hardware and drivers has entered a period of decline. In simple terms, view new drivers as a potential risk to stability, performance, and security.
  8. Join your hosts to the domain. Systems consume less of your time if they answer to a central authority.
  9. Use antivirus and intrusion prevention. As long as you choose your anti-malware vendor well and the proper exclusions are in place, performance will not be negatively impacted. Compare that to the performance of a compromised system.
  10. Read through our article on host performance tuning.
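Practices 3 and 4 lend themselves to a quick sketch. This assumes a Windows 8.1/10 workstation and a hypothetical host named HV01; adjust the names to your environment:

```powershell
# Install the Hyper-V management tools on the workstation, not on the host.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Tools-All
# Work against the host remotely; "HV01" is a hypothetical host name.
Get-VM -ComputerName HV01
# When you genuinely need a session on the host, use remoting and exit promptly.
Enter-PSSession -ComputerName HV01
```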

Leverage Containers

In the “traditional” virtualization model, we stand up multiple virtual machines running individual operating system environments. As “virtual machine sprawl” sets in, we wind up with a great deal of duplication. In the past, we could justify that as a separation of the environment. Furthermore, some Windows Server patches caused problems for some software but not others. In the modern era, containers and omnibus patch packages have upset that equation.

Instead of building virtual machine after virtual machine, you can build a few virtual machines. Deploy containers within them. Strategies for this approach exceed the parameters of this article, but you’re aiming to reduce the number of disparate complete operating system environments deployed. With careful planning, you can reduce density while maintaining a high degree of separation for your services. Fewer kernels are loaded, fewer context switches occur, less memory contains the same code bits, fewer disk seeks to retrieve essentially the same information from different locations.

  1. Prefer containers over virtual machines where possible.
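As a rough illustration of that practice, assuming a single Windows Server 2016 VM with the Containers feature and Docker already installed (the image name reflects the 2016-era repository and is illustrative only):

```powershell
# Two isolated IIS instances sharing one guest kernel instead of two full VMs.
docker run -d --name web1 microsoft/iis
docker run -d --name web2 microsoft/iis
docker ps    # both services run, separated, without a second OS environment
```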

CPU

You can’t do a great deal to tune CPU performance in Hyper-V. Overall, I count that among my list of “good things”; Microsoft did the hard work for you.

  1. Follow our article on host tuning; pay special attention to C States and the performance power settings.
  2. For Intel chips, leave hyperthreading on unless you have a defined reason to turn it off.
  3. Leave NUMA enabled in hardware. On your VMs’ property sheet, you’ll find a Use Hardware Topology button. Remember to use that any time that you adjust the number of vCPUs assigned to a virtual machine or move it to a host that has a different memory layout (physical core count and/or different memory distribution).
  4. Decide whether or not to allow guests to span NUMA nodes (the global host NUMA Spanning setting). If you size your VMs to stay within a NUMA node and you are careful to not assign more guests than can fit solidly within each NUMA node, then you can increase individual VM performance. However, if the host has trouble locking VMs into nodes, then you can negatively impact overall memory performance. If you’re not sure, just leave NUMA at defaults and tinker later.
  5. For modern guests, I recommend that you use at least two virtual CPUs per virtual machine. Use more in accordance with the virtual machine’s performance profile or vendor specifications. This is my own personal recommendation; I can visibly detect the response difference between a single vCPU guest and a dual vCPU guest.
  6. For legacy Windows guests (Windows XP/Windows Server 2003 and earlier), use 1 vCPU. More will likely hurt performance more than help.
  7. Do not grant more than 2 vCPU to a virtual machine without just cause. Hyper-V will do a better job reducing context switches and managing memory access if it doesn’t need to try to do too much core juggling. I’d make exceptions for very low-density hosts where 2 vCPU per guest might leave unused cores. On the other end, if you’re assigning 24 cores to every VM just because you can, then you will hurt performance.
  8. If you are preventing VMs from spanning NUMA nodes, do not assign more vCPU to a VM than you have matching physical cores in a NUMA node (usually means the number of cores per physical socket, but check with your hardware manufacturer).
  9. Use Hyper-V’s priority, weight, and reservation settings with great care. CPU bottlenecks are highly uncommon; look elsewhere first. A poor reservation will cause more problems than it solves.
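Several of the CPU items above map directly to PowerShell. A minimal sketch, where “web01” is a hypothetical virtual machine name:

```powershell
# Practices 3 and 8: inspect the host's NUMA layout before sizing vCPU counts.
Get-VMHostNumaNode | Format-Table NodeId, ProcessorsAvailability, MemoryAvailable
# Practices 5 and 7: two vCPUs for a modern guest, no more without just cause.
Set-VMProcessor -VMName web01 -Count 2
```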

Memory

I’ve long believed that every person that wants to be a systems administrator should be forced to become conversant in x86 assembly language, or at least C. I can usually spot people that have no familiarity with programming in such low-level languages because they almost invariably carry a bizarre mental picture of how computer memory works. Fortunately, modern memory is very, very, very fast. Even better, the programmers of modern operating system memory managers have gotten very good at their craft. Trying to tune memory as a systems administrator rarely pays dividends. However, we can establish some best practices for memory in Hyper-V.

  1. Follow our article on host tuning. Most importantly, if you have multiple CPUs, install your memory such that it uses multi-channel and provides an even amount of memory to each NUMA node.
  2. Be mindful of operating system driver quality. Windows drivers differ from applications in that they can permanently remove memory from the available pool. If they do not properly manage that memory, then you’re headed for some serious problems.
  3. Do not make your CSV cache too large.
  4. For virtual machines that will perform high quantities of memory operations, avoid dynamic memory. Dynamic memory disables NUMA (out of necessity). How do you know what constitutes a “high volume”? Without performance monitoring, you don’t.
  5. Set your fixed memory VMs to a higher priority and a shorter startup delay than your Dynamic Memory VMs. This ensures that they will start first, allowing Hyper-V to plot an optimal NUMA layout and reduce memory fragmentation. It doesn’t help a lot in a cluster, unfortunately. However, even in the best case, this technique won’t yield many benefits.
  6. Do not use more memory for a virtual machine than you can prove that it needs. Especially try to avoid using more memory than will fit in a single NUMA node.
  7. Use Dynamic Memory for virtual machines that do not require the absolute fastest memory performance.
  8. For Dynamic Memory virtual machines, pay the most attention to the startup value. It sets the tone for how the virtual machine will be treated during runtime. For virtual machines running full GUI Windows Server, I tend to use a startup of either 1 GB or 2 GB, depending on the version and what else is installed.
  9. For Dynamic Memory VMs, set the minimum to the operating system vendor’s stated minimum (512 MB for Windows Server). If the VM hosts a critical application, add to the minimum to ensure that it doesn’t get choked out.
  10. For Dynamic Memory VMs, set the maximum to a reasonable amount. You’ll generally discover that amount through trial and error and performance monitoring. Do not set it to an arbitrarily high number. Remember that, even on 2012 R2, you can raise the maximum at any time.

Check the CPU section for NUMA guidance.
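The Dynamic Memory guidance above translates to commands like these; “util01” and “sql01” are hypothetical VM names and the sizes are illustrative starting points, not prescriptions:

```powershell
# Practices 7-10: deliberate startup, minimum, and maximum values.
Set-VMMemory -VMName util01 -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 4GB
# Practice 4: fixed memory for a VM that performs heavy memory operations.
Set-VMMemory -VMName sql01 -DynamicMemoryEnabled $false -StartupBytes 16GB
```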

Networking

In the time that I’ve been helping people with Hyper-V, I don’t believe that I’ve seen anyone waste more time worrying about anything that’s less of an issue than networking. People will read whitepapers and forums and blog articles and novels and work all weekend to draw up intricately designed networking layouts that need eight pages of documentation. But, they won’t spend fifteen minutes setting up a network utilization monitor. I occasionally catch grief for using MRTG since it’s old and there are shinier, bigger, bolder tools, but MRTG is easy and quick to set up. You should know how much traffic your network pushes. That knowledge can guide you better than any abstract knowledge or feature list.

That said, we do have many best practices for networking performance in Hyper-V.

  1. Follow our article on host tuning. Especially pay attention to VMQ on gigabit and separation of storage traffic.
  2. If you need your network to go faster, use faster adapters and switches. A big team of gigabit won’t keep up with a single 10 gigabit port.
  3. Use a single virtual switch per host. Multiple virtual switches add processing overhead. Usually, you can get a single switch to do whatever you wanted multiple switches to do.
  4. Prefer a single large team over multiple small teams. This practice can also help you to avoid needless virtual switches.
  5. For gigabit, anything over 4 physical ports probably won’t yield meaningful returns. I would use 6 at the outside. If you’re using iSCSI or SMB, then two more physical adapters just for that would be acceptable.
  6. For 10GbE, anything over 2 physical ports probably won’t yield meaningful returns.
  7. If you have 2 10GbE and a bunch of gigabit ports in the same host, just ignore the gigabit. Maybe use it for iSCSI or SMB, if it’s adequate for your storage platform.
  8. Make certain that you understand how the Hyper-V virtual switch functions. Most important:
    • You cannot “see” the virtual switch in the management OS except with Hyper-V specific tools. It has no IP address and no presence in the Network and Sharing Center applet.
    • Anything that appears in Network and Sharing Center that you think belongs to the virtual switch is actually a virtual network adapter.
    • Layer 3 (IP) information in the host has no bearing on guests — unless you create an IP collision
  9. Do not create a virtual network adapter in the management operating system for the virtual machines. I did that before I understood the Hyper-V virtual switch, and I have encountered lots of other people that have done it. The virtual machines will use the virtual switch directly.
  10. Do not multi-home the host unless you know exactly what you are doing. Valid reasons to multi-home:
    • iSCSI/SMB adapters
    • Separate adapters for cluster roles. e.g. “Management”, “Live Migration”, and “Cluster Communications”
  11. If you multi-home the host, give only one adapter a default gateway. If other adapters must use gateways, use the old route command or the new New-NetRoute command.
  12. Do not try to use internal or private virtual switches for performance. The external virtual switch is equally fast. Internal and private switches are for isolation only.
  13. If all of your hardware supports it, enable jumbo frames. Ensure that you perform validation testing (e.g. ping storage-ip -f -l 8000).
  14. Pay attention to IP addressing. If traffic needs to locate an external router to reach another virtual adapter on the same host, then traffic will traverse the physical network.
  15. Use networking QoS if you have identified a problem.
    • Use datacenter bridging, if your hardware supports it.
    • Prefer the Weight QoS mode for the Hyper-V switch, especially when teaming.
    • To minimize the negative side effects of QoS, rely on limiting the maximums of misbehaving or non-critical VMs over trying to guarantee minimums for vital VMs.
  16. If you have SR-IOV-capable physical NICs, SR-IOV provides the best network performance. However, you can’t use the traditional Windows team for the physical NICs. Also, you can’t use VMQ and SR-IOV at the same time.
  17. Switch-embedded teaming (2016) allows you to use SR-IOV. Standard teaming does not.
  18. If using VMQ, configure the processor sets correctly.
  19. When teaming, prefer Switch Independent mode with the Dynamic load balancing algorithm. I have done some performance testing on the types (near the end of the linked article). However, a reader commented on another article that the Dynamic/Switch Independent combination can cause some problems for third-party load balancers (see comments section).
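A few of the networking items can be sketched in PowerShell. The adapter names, subnet, and next hop are hypothetical; verify them against your own layout:

```powershell
# Practice 17: a switch-embedded team (2016) keeps SR-IOV on the table.
New-VMSwitch -Name 'vSwitch' -NetAdapterName 'NIC1','NIC2' -EnableEmbeddedTeaming $true
# Practice 11: only one adapter gets a default gateway; other subnets get specific routes.
New-NetRoute -InterfaceAlias 'vEthernet (Storage)' -DestinationPrefix '192.168.50.0/24' -NextHop '192.168.50.1'
```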

Storage

When you need to make real differences in Hyper-V’s performance, focus on storage. Storage is slow. The best way to make storage not be slow is to spend money. But, we have other ways.

  1. Follow our article on host tuning. Especially pay attention to:
    • Do not break up internal drive bays between Hyper-V and the guests. Use one big array.
    • Do not tune the Hyper-V partition for speed. After it boots, Hyper-V averages zero IOPS for itself. As a prime example, don’t put Hyper-V on SSD and the VMs on spinning disks. Do the opposite.
    • The best way to get more storage speed is to use faster disks and bigger arrays. Almost everything else will only yield tiny differences.
  2. For VHD (not VHDX), use fixed disks for maximum performance. Dynamically-expanding VHD is marginally, but measurably, slower.
  3. For VHDX, use dynamically-expanding disks for everything except high-utilization databases. I receive many arguments on this, but I’ve done the performance tests and have years of real-world experience. You can trust that (and run the tests yourself), or you can trust theoretical whitepapers from people that make their living by overselling disk space but have perpetually misplaced their copy of diskspd.
  4. Avoid using shared VHDX (2012 R2) or VHDS (2016). Performance still isn’t there. Give this technology another maturation cycle or two and look at it again.
  5. Where possible, do not use multiple data partitions in a single VHD/X.
  6. When using Cluster Shared Volumes, try to use at least as many CSVs as you have nodes. Starting with 2012 R2, CSV ownership will be distributed evenly, theoretically improving overall access.
  7. You can theoretically improve storage performance by dividing virtual machines across separate storage locations. If you need to make your arrays span fewer disks in order to divide your VMs’ storage, you will have a net loss in performance. If you are creating multiple LUNs or partitions across the same disks to divide up VMs, you will have a net loss in performance.
  8. For RDS virtual machine-based VDI, use hardware-based or Windows’ Hyper-V-mode deduplication on the storage system. The read hits, especially with caching, yield positive performance benefits.
  9. The jury is still out on using host-level deduplication for Windows Server guests, but it is supported with 2016. I personally will be trying to place Server OS disks on SMB storage deduplicated in Hyper-V mode.
  10. The slowest component in a storage system is the disk(s); don’t spend a lot of time worrying about controllers beyond enabling caching.
  11. RAID-0 is the fastest RAID type, but provides no redundancy.
  12. RAID-10 is generally the fastest RAID type that provides redundancy.
  13. For Storage Spaces, three-way mirror is fastest (by a lot).
  14. For remote storage, prefer MPIO or SMB multichannel over multiple unteamed adapters. Avoid placing this traffic on teamed adapters.
  15. I’ve read some scattered notes that say that you should format with 64 kilobyte allocation units. I have never done this, mostly because I don’t think about it until it’s too late. If the default size hurts anything, I can’t tell. Someday, I’ll remember to try it and will update this article after I’ve gotten some performance traces. If you’ll be hosting a lot of SQL VMs and will be formatting their VHDX with 64kb AUs, then you might get more benefit.
  16. I still don’t think that ReFS is quite mature enough to replace NTFS for Hyper-V. For performance, I definitely stick with NTFS.
  17. Don’t do full defragmentation. It doesn’t help. The minimal defragmentation that Windows automatically performs is all that you need. If you have some crummy application that makes this statement false, then stop using that application or exile it to its own physical server. Defragmentation’s primary purpose is to wear down your hard drives so that you have to buy more hard drives sooner than necessary, which is why employees of hardware vendors recommend it all the time. If you have a personal neurosis that causes you pain when a disk becomes “too” fragmented, use Storage Live Migration to clear and then re-populate partitions/LUNs. It’s wasted time that you’ll never get back, but at least it’s faster. Note: All retorts must include verifiable and reproducible performance traces, or I’m just going to delete them.
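Practices 2, 3, and 15 in command form; the paths, sizes, and drive letter are hypothetical:

```powershell
# Practice 3: VHDX, dynamically-expanding for most roles...
New-VHD -Path 'D:\VMs\app01.vhdx' -SizeBytes 100GB -Dynamic
# ...and fixed for a high-utilization database.
New-VHD -Path 'D:\VMs\sql01.vhdx' -SizeBytes 200GB -Fixed
# Practice 15: 64 KB allocation units for a volume that will mostly hold SQL VHDX files.
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536
```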

Clustering

For real performance, don’t cluster virtual machines. Use fast internal or direct-attached SSDs. Cluster for redundancy, not performance. Use application-level redundancy techniques instead of relying on Hyper-V clustering.

In the modern cloud era, though, most software doesn’t have its own redundancy and host clustering is nearly a requirement. Follow these best practices:

  1. Validate your cluster. You may not need to fix every single warning, but be aware of them.
  2. Follow our article on host tuning. Especially pay attention to the bits on caching storage. It includes a link to enable CSV caching.
  3. Remember your initial density target. Add as many nodes as necessary to maintain that along with sufficient extra nodes for failure protection.
  4. Use the same hardware in each node. You can mix hardware, but CPU compatibility mode and mismatched NUMA nodes will have at least some impact on performance.
  5. For Hyper-V, every cluster node should use a minimum of two separate IP endpoints. Each IP must exist in a separate subnet. This practice allows the cluster to establish multiple simultaneous network streams for internode traffic.
    • One of the addresses must be designated as a “management” IP, meaning that it must have a valid default gateway and register in DNS. Inbound connections (such as your own RDP and PowerShell Remoting) will use that IP.
    • None of the non-management IPs should have a default gateway or register in DNS.
    • One alternative IP endpoint should be preferred for Live Migration. Cascade Live Migration preference order through the others, ending with the management IP. You can configure this setting most easily in Failover Cluster Manager by right-clicking on the Networks node.
    • Further IP endpoints can be used to provide additional pathways for cluster communications. Cluster communications include the heartbeat, cluster status and update messages, and Cluster Shared Volume information and Redirected Access traffic.
    • You can set any adapter to be excluded from cluster communications but included in Live Migration in order to enforce segregation. Doing so generally does not improve performance, but may be desirable in some cases.
    • You can use physical or virtual network adapters to host cluster IPs.
    • The IP for each cluster adapter must exist in a unique subnet on that host.
    • Each cluster node must contain an IP address in the same subnet as the IPs on other nodes. If a node does not contain an IP in a subnet that exists on other nodes, then that network will be considered “partitioned” and the node(s) without a member IP will be excluded from that network.
    • If the host will connect to storage via iSCSI, segregate iSCSI traffic onto its own IP(s). Exclude it/them from cluster communications and Live Migration. Because they don’t participate in cluster communications, it is not absolutely necessary that they be placed into separate subnets. However, doing so will provide some protection from network storms.
  6. If you do not have RDMA-capable physical adapters, Compression usually provides the best Live Migration performance.
  7. If you do have RDMA-capable physical adapters, SMB usually provides the best Live Migration performance.
  8. I don’t recommend spending time tinkering with the metric to shape CSV traffic anymore. It utilizes SMB, so the built-in SMB multi-channel technology can sort things out.
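Practices 1, 6, and 7 reduce to a few commands; the node names are hypothetical:

```powershell
# Validate the cluster and review (not necessarily fix) every warning.
Test-Cluster -Node HV01, HV02
# Match the Live Migration transport to the hardware: Compression without RDMA...
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
# ...or SMB when the physical adapters are RDMA-capable:
# Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```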

Virtual Machines

The preceding guidance obliquely covers several virtual machine configuration points (check the CPU and the memory sections). We have a few more:

  1. Don’t use Shielded VMs or BitLocker. The encryption and VMWP hardening incur overhead that will hurt performance. The hit is minimal — but this article is about performance.
  2. If you have 1) VMs with very high inbound networking needs, 2) physical NICs >= 10GbE, 3) VMQ enabled, 4) spare CPU cycles, then enable RSS within the guest operating systems. Do not enable RSS in the guest OS unless all of the preceding are true.
  3. Do not use the legacy network adapter in Generation 1 VMs any more than absolutely necessary.
  4. Utilize checkpoints rarely and briefly. Know the difference between standard and “production” checkpoints.
  5. Use time synchronization appropriately. Meaning, virtual domain controllers should not have the Hyper-V time synchronization service enabled, but all other VMs should (generally speaking). The hosts should pull their time from the domain hierarchy. If possible, the primary domain controller should be pulling from a secured time source.
  6. Keep Hyper-V guest services up-to-date. Supported Linux systems can be updated via kernel upgrades/updates from their distribution repositories. Windows 8.1+ and Windows Server 2012 R2+ will update from Windows Update.
  7. Don’t do full defragmentation in the guests, either. Seriously. We’re administering multi-spindle server equipment here, not displaying a progress bar to someone with a 5400-RPM laptop drive so that they feel like they’re accomplishing something.
  8. If the virtual machine’s primary purpose is to run an application that has its own replication technology, don’t use Hyper-V Replica. Examples: Active Directory and Microsoft SQL Server. Such applications will replicate themselves far more efficiently than Hyper-V Replica.
  9. If you’re using Hyper-V Replica, consider moving the VMs’ page files to their own virtual disk and excluding it from the replica job. If you have a small page file that doesn’t churn much, that might cost you more time and effort than you’ll recoup.
  10. If you’re using Hyper-V Replica, enable compression if you have spare CPU but leave it disabled if you have spare network. If you’re not sure, use compression.
  11. If you are shipping your Hyper-V Replica traffic across an encrypted VPN or keeping its traffic within secure networks, use Kerberos. Certificate-based (SSL) en/decryption requires additional CPU and adds some size overhead to the transmitted data.
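Practice 5 from the list above is a one-liner per domain controller; “dc01” is a hypothetical VM name:

```powershell
# A virtual DC should take time from the domain hierarchy, not from the host.
Disable-VMIntegrationService -VMName dc01 -Name 'Time Synchronization'
# Confirm the other integration services remain enabled.
Get-VMIntegrationService -VMName dc01
```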

Monitoring

You must monitor your systems. Monitoring is not, and has never been, an optional activity.

  1. Be aware of Hyper-V-specific counters. Many people try to use Task Manager in the management operating system to gauge guest CPU usage, but it just doesn’t work. The management operating system is a special-case virtual machine, which means that it is using virtual CPUs. Its Task Manager cannot see what the guests are doing.
  2. Performance Monitor has the most power of any built-in tool, but it’s tough to use. Look at something like Performance Analysis of Logs (PAL) tool, which understands Hyper-V.
  3. In addition to performance monitoring, employ state monitoring. With that, you no longer have to worry (as much) about surprise events like disk space or memory filling up. I like Nagios, as regular readers already know, but you can select from many packages.
  4. Take periodic performance baselines and compare them to earlier baselines.
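To underline practice 1, the hypervisor’s own counters expose what Task Manager in the management OS cannot:

```powershell
# Per-vCPU guest utilization, as the hypervisor sees it.
Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'
# Total load across the host's logical processors.
Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'
```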

If you’re able to address a fair proportion of points from this list, I’m sure you’ll see a boost in Hyper-V performance. Don’t forget that this list is not exhaustive; I’ll be adding to it periodically to keep it as comprehensive as possible. If you think something is missing, let me know in the comments below and you may see the number 95 increase!

For Sale – Scan 3XS 4K Gaming Laptop with 6GB GTX1060

Struggling to find time to use. 9 months old. Fantastic machine. Cost £1479 new.

15.6″ Scan 3XS LG15 Vengeance G-Sync, 6GB GTX 1060, 4K screen, Core i7 6700HQ, 16GB DDR4, 240GB SSD, 1TB HDD, Win 10

The machine is as new.

Price and currency: £850
Delivery: Delivery cost is included within my country
Payment method: BT
Location: Bolton
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference



For Sale – Scan 3XS Gaming Laptop 15in GeForce GTX 1060

Struggling to find time to use. 8 months old. Fantastic machine. Cost £1479 new.

15.6″ Scan 3XS LG15 Vengeance G-Sync, GTX 1060, 4K screen, Core i7 6700HQ, 16GB DDR4, 240GB SSD, 1TB HDD, Win 10

3XS Gaming Laptop 15in GeForce GTX 1060

Happy for collection.

Price and currency: £875
Delivery: Delivery cost is included within my country
Payment method: BT or PayPal
Location: Bolton
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Learn how to use Java development tooling at JavaOne 2017

Enterprises pursuing a digital transformation strategy need to find a balance between accessing legacy infrastructure and taking advantage of modern programming tools offered by containers and web services. One approach has been the adoption of web development tooling based on JavaScript’s Node.js, Python and PHP. Recent advancements around container awareness, modularity and lighter-weight Java virtual machines based on the Java Enterprise Edition MicroProfile promise to make better use of existing Java development tooling and expertise.

“Developers are building microservices with a lot of different languages, including Node.JS, Python and PHP,” said Mike Lehman, vice president of product management for Oracle. “There is a huge enterprise knowledge base and skills capabilities around Java. But a vast number of emerging and popular languages are being used.”

Reconsider Java for cloud-native

A clear majority of enterprise apps have been built in Java, and Java patterns are widely used to demonstrate new concepts. There is also a rich set of enterprise tooling to support code serviceability, monitoring and operations.

Newer web-focused development languages like PHP, Python and JavaScript can draw on a vast army of web developers. But they don’t necessarily have the same support for enterprise-grade features like security, health checks and code management.

Java Enterprise Edition (EE) does include these features, but it also comes with the burden of a larger VM and app server model. New modularity features being implemented into Java EE based on Project Jigsaw and Java MicroProfile promise to enable enterprises to use their Java expertise and code base for digital transformation initiatives. Lehman said, “I often hear from the development teams that, while they can run with those other languages, having a centricity of one language makes them more productive overall just because of the existing skill sets, knowledge and management.”

Java’s container roots

Mark Little, vice president of software engineering at Red Hat, said, “In some respects, Java has had the concept of a container for years as the JVM [Java virtual machine] at one level and at a higher level as the application container defined by Java EE and implemented by Apache Tomcat or Red Hat [Enterprise Application Platform]. Docker containers provide a higher level of abstraction for combining these with Linux containers because workloads need to move to the cloud efficiently and at a low cost.”

Linux containers are the most convenient, repeatable, reliable and scalable way of doing this. In the short term, the biggest trend will be moving existing Java workloads so they can be deployed efficiently into Linux containers, Little said. That alone will take time and some re-architecting.

However, the JVM was not built with the modern idea of containers in mind. According to Little, “There are some architectural and implementation issues which we know about and which are being worked on.”

Little is also seeing a trend toward stripping down Java EE and other application server containers to core services that can be deployed into Linux containers. There are also Java Development Kit Enhancement Proposals open for improving the way the JVM works with Linux containers.

Use Java EE MicroProfile to streamline

The advent of Java MicroProfile promises to be a big enabler for digital transformation of the enterprise by shrinking the size of Java runtimes that can run inside and outside the enterprise. MicroProfile is important for enterprise architects because, if legacy apps can be transformed into microservices in their own natural environment, digital transformation is easier for developers, testers and operations teams.

This will allow enterprises to take advantage of their existing Java development resources and tooling for creating smaller, nimbler microservice applications that can run across different cloud platforms. This is important because each layer in the application stack adds overhead. Better container interoperability is helping to reduce the overhead of traditional virtual machines. MicroProfile standardization promises to similarly reduce the overhead of traditional JVMs.

Major vendors and enterprise architects are collaborating on the Eclipse MicroProfile open source projects to bring additional APIs that will provide Java EE developers with new features required for implementing robust enterprise apps.

MicroProfile interoperability has been demonstrated across several Java app servers, including Red Hat WildFly Swarm, IBM WebSphere Liberty, Payara MicroProfile and Tomitribe TomEE. IBM plans to make it easier for developers to code microservices that can be frequently updated and moved between different cloud environments with the recent release of its Open Liberty Java EE and MicroProfile implementation; IBM has also contributed its J9 JVM to the Eclipse Foundation as Eclipse OpenJ9. Ian Robinson, WebSphere chief architect at IBM, said IBM is keen to see Eclipse MicroProfile work well with service mesh infrastructure like Istio.

Enterprise-grade features coming to Java EE MicroProfile

One of the higher-level goals is to get Java developers of legacy apps to reuse their existing tooling in a way that allows them to spend more time writing code and less configuring the underlying infrastructure required to execute and orchestrate application logic. New APIs being baked into MicroProfile just in time for JavaOne 2017 support better configuration, fault tolerance, health metrics, health checks and security via JSON Web Token propagation. These features will enable Java app developers to extend the power of their microservices using well-defined specifications and APIs that work across cloud platforms.

According to Red Hat’s Little, “We are seeing that with Eclipse MicroProfile, specifications are being defined with the understanding that the underlying container platform offers services that are relevant to applications. For example, the MicroProfile Configuration 1.0 specification was defined with the expectation that application configuration is externalized and integrated into the underlying container platform.”

The MicroProfile Fault Tolerance 1.0 specification understands that circuit breaking may be a service offered by an underlying service mesh like Istio. Fault Tolerance was built to offer a clean integration between the two to address a wider set of uses.
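The circuit-breaker idea referenced above — failing fast once a downstream service has proven unhealthy, rather than letting every caller time out — can be sketched in a few lines. This illustration is language-agnostic and purely hypothetical; it is not the MicroProfile Fault Tolerance API, whose Java annotations also handle automatic half-open recovery after a cool-down:

```python
# Minimal circuit-breaker sketch: after `max_failures` consecutive failures the
# breaker "opens" and rejects calls immediately instead of invoking the service.
# (Real implementations also close the breaker again after a cool-down period.)

class CircuitOpenError(Exception):
    """Raised when the breaker is open and the call is short-circuited."""

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            raise CircuitOpenError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1  # count the consecutive failure and re-raise
            raise
        self.failures = 0  # a success resets the failure count
        return result
```

The value of pushing this into a shared layer (a spec like Fault Tolerance, or a mesh like Istio) is that application code no longer has to hand-roll this bookkeeping per service call.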

In many ways, these improvements should encourage enterprise architects to adopt Java for a wider array of digital transformation initiatives. Oracle’s Lehman observed, “Java remains one of the most popular languages on the Internet, with about 12 million developers and over 21 billion JVMs deployed to the cloud. Many of these developers have a rich skill set, and in many ways, Java is a powerful language for building microservices and driving digital transformation.” 

For Sale – Dell OptiPlex 745

While doing a garage clean-up I came across my old Dell OptiPlex 745. It needs to find a new home. It originally came with Vista and has a serial sticker on the side panel. I have installed Windows 7, but you will require your own activation code. Overall condition is very good, as you can see in the photos. Comes with a separate power supply.

Comes with:

  • Intel Core 2 Duo – 4300 @ 1.80GHz
  • 4GB of RAM
  • Display Intel Q965/Q963 Express
  • 7 x USB
  • 1 x DVI Port
  • 1 x Serial Port
  • 1 x Ethernet Port
  • 1 x Parallel Port
  • Front and Rear Audio Jacks
  • 250GB Hard Disk
  • DVD RW Rom

After installing Windows 7, there are no drivers that require updating.

Price and currency: 55
Delivery: Delivery cost is included within my country
Payment method: PPG or BT
Location: Reading
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

GE’s Predix platform bolstered by data, domain expertise

BOSTON — CIOs looking for a digital transformation case study will find an ongoing master class at GE. With its Predix platform, which collects and analyzes sensor data from industrial assets such as MRI machines and has been generally available for 18 months, the company is attempting to position itself as the backbone of the industrial internet of things.

But the transformation efforts have been slow to produce results. The company’s earnings are lagging behind industrial competitors — making shareholders uneasy and ultimately leading to the recent departure of CEO Jeff Immelt, digital advocate and proponent of the two-year-old GE Digital.

That was the backdrop for a recent media day event at the company’s temporary headquarters in Boston. Three representatives from GE Digital — Mark Bernardo, vice president of professional services; Mike Varney, senior director of product management; and Jeff Erhardt, vice president of intelligent systems — provided an informal presentation on GE’s Predix platform, the critical role of data and domain expertise for machine learning, and what the future of GE’s young business unit might look like.

Predix platform is key

Immelt was replaced last month by John Flannery, a GE veteran who most recently worked with the company’s healthcare division. One of Flannery’s early tasks as CEO is performing a deep dive into each of GE’s businesses. He plans to complete his audit later this year and present recommendations to investors.

What Flannery’s investigation will mean for the future of the company is yet to be seen. But the representatives from GE Digital said they’ve seen no change in strategy to date and that Immelt’s vision to create the platform for the industrial IoT will likely continue.

In fact, Bernardo, a GE employee for more than 10 years, described reports that GE Digital will need to step up revenue production in 2018 as “normal GE behavior” and not a deviation from strategy.

“Our platform, our application investments, our investment in machine learning, our investment in our talent, the reason why domain expertise is important to us is because we need it in order to generate the outcomes our customers need, and to generate the growth and productivity that we need as a business,” he said. “We are as dependent on this strategy as any of our customers.”

With the mention of machine learning, Bernardo is referring, in part, to GE Digital’s 2016 acquisition of Wise.io, a startup out of Berkeley, Calif., that specialized in predicting customer behavior. That may seem like a far cry from industrial assets, but Erhardt, CEO at Wise.io at the time of acquisition, said the key to solving hard problems like predicting customer or machine behavior hinges on a common, underlying data platform that provides a foundation for application development.

“That’s what Salesforce.com has done,” Erhardt said. GE’s Predix platform is built on the same basic model. Erhardt said Wise.io observed from dealings with customers that a data platform is necessary to successfully scale a company based around machine learning, and that it was one of the reasons why being acquired by GE made sense for the startup.

Data is the new oil pipeline

For Wise.io’s part, its job is to make GE applications intelligent. Doing so generally requires computational power and machine learning algorithms — both of which have become commoditized at this point — as well as the increasingly valuable data and domain expertise, according to Erhardt.

“[Data and domain expertise] are at the forefront of both research and how you apply these intelligent techniques, as well as where you can create value,” he said.

He used GE’s intelligent pipeline integrity services products, which rely on the same basic imaging technology packaged in the healthcare business’s products, as an example. “We stick [them] in an oil pipeline and we use [them] to look for defects and weaknesses indicative of that pipeline potentially blowing up,” Erhardt said.

But the technology captures so much data — Erhardt said roughly a terabyte of images — that it can take highly trained experts months to sort out. The machine learning technology, which he defines as “the ability for computers to mimic human decision-making around a data-driven work flow,” relies on past data and decisions to flag problematic areas at super-human speeds.

“The purpose and the idea behind this is to clean up the noise and allow the people to focus on the highest risk, [the] most uncertain areas,” Erhardt said.

The technology doesn’t replace human decision-making outright. Erhardt said his team is spending a good chunk of its time striking the right balance between automation, augmentation and deference. In the latter case, the system defers to domain experts, who may have decades of experience working with complex industrial assets. Domain experts also help GE’s managed service customers prioritize anomalies surfaced by machine learning technology.

Keeping a human in the loop, in other words, is essential. “What’s really important here — and this is different than the consumer space — the cost of being wrong can be very, very high,” Erhardt said.

It’s another reason why machine learning algorithms have to be well-trained, which requires enormous amounts of data. Instead of relying on data generated by a single pipeline integrity product or even a single customer, the Predix platform enables the company to collect and aggregate data across its customer base — and even across its businesses — in a single location. This gives the machine learning tech plenty of training data to learn with and potentially gives GE Digital the raw material to create new revenue streams.

“We’re looking for commonality across these very powerful business cases that exist within our business. What it then gives us the ability to do is to create these derivative products,” Erhardt said. He cited Google’s 2013 acquisition of Waze, an application that helps users avoid traffic jams by using geolocation driver data, as an example of how companies are using data generated by one application to help power other applications. Waze remains a stand-alone application, but the data shared by drivers is now used for city planning purposes.

“The way that we approach this is if you get the core product right — if you can entice your customers to contribute back more data — you not only make that good but you create opportunities you didn’t know about before,” Erhardt said. “That’s what we’re working on.”