VDI has led an up-and-down existence. Once heralded as the next great evolution in virtualization when it hit the scene more than a decade ago, VDI was eventually labeled a failure by many. In reality, the future of VDI is still very much alive.
To find out just how viable VDI is at this point, it’s important to look at the reasons many people cited when predicting the technology would fail and see what, if anything, has changed.
VDI is cost-prohibitive
In the early days, the overall cost of a virtual desktop could be as much as five times higher than that of a comparable physical desktop, according to some estimates. In fact, virtual desktops were once so ridiculously expensive that they were something of a status symbol for the few organizations that chose to use them in production.
Today, the debate over whether it is less expensive to deploy physical or virtual desktops rages on. VDI vendors are quick to tell potential customers that virtual desktops cost less than physical desktops, while skeptics claim the opposite.
Regardless of whether virtual desktops or physical desktops have a lower total cost of ownership, one thing is certain: The fact that people are even debating which desktop type is less expensive serves as proof that the cost of VDI has dropped tremendously.
The overall cost reduction can be attributed to three main factors:
- Friendlier licensing terms
- Moore’s Law: As processor and memory density increase, IT can host more virtual desktops per server without buying more hardware.
- Purpose-built hyper-converged infrastructure: HCI lowers the cost of long-term operational tasks by combining compute, storage and networking in one piece of hardware.
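To see how rising hardware density translates into lower per-desktop cost, consider a back-of-the-envelope sizing calculation. The function and all of the figures below — vCPU counts, memory targets, the overcommit ratio — are hypothetical illustrations, not vendor sizing guidance:

```python
# Rough sketch of VDI host sizing, using hypothetical hardware specs and
# per-desktop resource targets. Real sizing depends on workload profiling.

def desktops_per_host(host_cores, host_ram_gb,
                      vcpus_per_desktop=2, ram_gb_per_desktop=4,
                      vcpu_overcommit=4.0, ram_reserve_gb=16):
    """Estimate how many virtual desktops one host can support.

    vcpu_overcommit: vCPU-to-physical-core ratio; desktop workloads are
    bursty, so ratios of 4:1 or higher are a common assumption.
    ram_reserve_gb: memory held back for the hypervisor itself.
    """
    cpu_bound = (host_cores * vcpu_overcommit) // vcpus_per_desktop
    ram_bound = (host_ram_gb - ram_reserve_gb) // ram_gb_per_desktop
    return int(min(cpu_bound, ram_bound))

# An early-era host might have offered 8 cores and 64 GB of RAM;
# a modern server can easily provide 64 cores and 1 TB.
old_host = desktops_per_host(8, 64)      # -> 12 desktops
new_host = desktops_per_host(64, 1024)   # -> 128 desktops
```

Under these illustrative assumptions, the same licensing and rack space that once covered a dozen desktops now covers more than a hundred, which is the heart of the density argument.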
Enterprise desktops are complex
There is far more to an enterprise desktop — physical or virtual — than just the OS. There are many other components IT must consider, including applications, device drivers and user profiles.
At one time, making a change to any one of these or other desktop components could have meant building and deploying a brand-new desktop image. As such, many people considered virtual desktop management to be far too labor-intensive for the technology to ever be practical.
Today, however, there is no reason IT has to base virtual desktops on a single, monolithic and frequently updated image. It is increasingly common for IT to break down virtual desktops into a series of virtualized subcomponents. This approach works similarly to containerization.
For example, IT might virtualize applications so it can maintain them without touching the OS image. Likewise, IT can break the user profile into its own virtualized layer, creating the illusion of persistence without affecting the desktop OS. In each case, this layering approach addresses the challenges of enterprise desktop complexity.
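The layering idea can be sketched in a few lines of code. This is an illustrative model only — the class names and structure are invented for the example and do not reflect any vendor’s actual layering API:

```python
# Toy model of desktop layering: a desktop is composed from independent
# layers, so updating one layer (say, an application) never requires
# rebuilding the base OS image.
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Layer:
    name: str
    version: int

@dataclass
class VirtualDesktop:
    os_layer: Layer
    app_layers: dict = field(default_factory=dict)
    profile_layer: Optional[Layer] = None

    def attach_app(self, layer: Layer):
        # Replacing an app layer leaves the OS and profile untouched.
        self.app_layers[layer.name] = layer

desktop = VirtualDesktop(os_layer=Layer("windows-base", 1))
desktop.attach_app(Layer("office", 1))
desktop.profile_layer = Layer("jdoe-profile", 7)

# Patch the office app by swapping in a new layer version:
desktop.attach_app(Layer("office", 2))
assert desktop.os_layer.version == 1   # base image never rebuilt
```

The point the model makes is the same one layering vendors make: the application update touched exactly one layer, while the monolithic-image approach would have forced a full image rebuild and redeployment.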
VDI performance is insufficient

An early VDI implementation might work fine for lightweight apps, such as word processors and spreadsheets, but it would likely have trouble coping with anything graphically or computationally intensive. Even the simple act of playing a YouTube video could bring a VDI deployment to its knees.
Today, VDI performance is generally far better. Yes, Moore’s Law plays into this, but there are also some other factors. For instance, VDI has existed for long enough that vendors have gotten a lot better at performance-tuning their VDI software. Likewise, vendors such as VMware and Microsoft added features designed to ensure performance and stability.
For example, IT can throttle virtual desktops to prevent one user from generating a workload that affects other virtual desktops. In addition, IT can map virtual desktops running graphically intensive workloads to physical GPUs, providing graphical performance nearly as good as what users would expect from a physical desktop.
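The throttling mechanism can be illustrated with a minimal sketch. This is not a real hypervisor API — the function, cap value and VM names are hypothetical — but it shows the principle: a hard per-desktop cap means one runaway workload cannot crowd out its neighbors:

```python
# Toy model of per-desktop CPU throttling. Each desktop's demand is
# granted only up to a fixed cap, expressed as a percent of host CPU.

def allocate_cpu(demands, cap_pct=25.0):
    """Grant each desktop its demanded CPU share, capped per desktop.

    demands: mapping of desktop name -> requested % of host CPU.
    Returns the granted allocation per desktop.
    """
    return {name: min(pct, cap_pct) for name, pct in demands.items()}

demands = {"vm-finance": 10.0, "vm-design": 80.0, "vm-hr": 15.0}
granted = allocate_cpu(demands)
# The 80% runaway request from vm-design is clamped to the 25% cap,
# leaving headroom for the other desktops.
```

Real hypervisors implement this with scheduler shares, limits and reservations rather than a simple clamp, but the effect from the user's perspective is the same: predictable performance regardless of what neighboring desktops are doing.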