Tag Archives: virtualization

Parallels RAS pushes remote work flexibility

The sudden transition to remote work has created a demand for application and desktop virtualization products that, like Parallels Remote Application Server, will work with whatever device an employee has on hand.

Representatives from the application and desktop virtualization vendor said the COVID-19 outbreak has pushed both new and existing customers to seek flexibility as they strive to handle the unprecedented work-from-home situation.

The Parallels Remote Application Server (RAS) software can deliver applications and desktops to multiple types of devices — from Macs to Chromebooks and from iPads to Android phones. The company released Parallels RAS 17.1 in December 2019, updating provisioning options and introducing a new multi-tenant architecture.

John Uppendahl, vice president of communications at Parallels, said the product compares to offerings from Citrix and VMware.

“You can be up and running in less than an hour and deploying virtual Windows desktop applications to any device in less than a day,” Uppendahl said.

Shannon Kalvar, research manager at IDC, listed Parallels among the virtual client computing market’s major players in his 2019-2020 vendor assessment, noting that customers praised its ease of management and ability to work across a range of devices. He said the sudden interest in remote work technology is driving up demand for the companies that provide it.

“Everybody’s phone is ringing off the hook,” he said. “Everybody’s flat out.”

A need for flexibility

Victor Fiss, director of sales engineering at Parallels, said COVID-19 drove many of its customers to seek temporary licenses for hundreds of additional employees. Parallels RAS can run on premises, on the Azure and AWS public clouds and in a hybrid environment, he said, giving existing customers flexibility in expanding.

“A lot of our customers that are running on-prem are now adding 300, 400 users out of the blue because of COVID-19,” he said, adding that hybrid options have been enticing because they provide capacity without affecting the employee’s experience.

With Parallels RAS, Fiss said, deployment is not only fast but also allows for more ways to get work done — like support for native touch gestures in virtual desktop environments.

“If you’re using a mobile device — iOS or Android — you’re not getting a shrunken-down desktop that’s screaming for a keyboard and mouse you don’t have,” Uppendahl said. “Instead, you’re seeing application shortcuts — you can add or remove shortcuts to any application that runs on Windows — and, when you launch it, it will launch in full screen.”

Deploying Parallels

Wayne Hunter, CEO of managed service provider AvTek Solutions, Inc., said he had used Parallels RAS to enable remote work for one of his clients. That client, a bank, went from zero remote users to 150 in two days.

“The main thing that makes it easy to use is that it’s easy to install, easy to configure, easy to set up,” he said. “You can go from having nothing installed to having a working system up in just a couple hours.”

Hunter said several factors make Parallels RAS advantageous for IT professionals. The product’s ease of deployment and management, he said, would be especially beneficial to small IT teams managing many users.

For end users, Hunter said, the ability of Parallels RAS to work on a variety of devices without hassle was a key selling point.

“It’s just like logging in at their office,” he said, noting that users would find their profiles, desktop backgrounds and files unaffected by remote access. “It’s all there, just like it looked at the office.”

It can be challenging, Hunter noted, to ensure users have a proper device and high-speed internet connection at home to enable remote work. Parallels RAS, he said, eased those concerns.

“The beautiful part of Parallels RAS is [that] it doesn’t take much resources,” he said. “The software is very lightweight, so even some folks who didn’t have very good internet didn’t have any problems.”

An evolution of the virtualization market

Kalvar has spoken of a split in the virtualization market between the hosting of a desktop or application and fuller-featured workspace products. The pandemic’s work-from-home orders have furthered that divide; companies that are just beginning their efforts to change workflows through technology, he said, are more apt to explore traditional virtualization.

“For those [not far along with their business continuity plans], this is going to be an 18-month business continuity disaster,” he said. “If you’re in a continuity situation, and you don’t already have a solution in play — because, if you did, the first thing you would do is try to expand it — I think you’re looking more at the vendors who went down the virtualization side of the road … just because their technology matches up with what you need.”

“What [those] people need is a really fast, really cheap way to get people working from home quickly,” he added.

Kalvar said businesses — especially those just looking to maintain continuity through the crisis — must seek products that are easy both to stand up and to manage.

“You have to be flexible, particularly when you’re in that business continuity situation,” he said. “In operations, you’re always looking for good enough, not perfect.”

“You’re looking for, ‘This solution meets enough of my criteria … at the lowest cost,'” he added.

Go to Original Article

Workspot VDI key to engineering firm’s pandemic planning

Like many companies, Southland Industries is working to accelerate its virtualization plans in the face of the coronavirus pandemic.

The mechanical engineering firm, which is based in Garden Grove, Calif., and has seven main offices across the U.S., has been using the Workspot Workstation Cloud virtual desktop service. Combined with Microsoft Azure, Workspot’s service enables engineers to do design-intensive work at home and helps Southland keep pace as technology advances. When COVID-19 emerged, the company was transitioning users in the mid-Atlantic states to virtual desktops.

Israel Sumano, senior director of infrastructure at Southland Industries, recently spoke about making the move to virtual desktops and the challenges posed by the current public health crisis.

How did your relationship with Workspot first begin?

Israel Sumano: We were replicating about 50 terabytes across 17 different locations in the U.S. real-time, with real-time file launches. It became unsustainable. So over the last five years, I’ve tested VDI solutions — Citrix, [VMware] Horizon, other hosted solutions, different types of hardware. We never felt the performance was there for our users.

When Workspot came to us, I liked it because we were able to deploy within a week. We tested it on on-prem hardware, we tested it on different cloud providers, but it wasn’t until we had Workspot on [Microsoft] Azure that we were comfortable with the solution.

For us to build our own GPU-enabled VDI systems [needed for computing-intensive design work], we probably would have spent about $4 million, and they would have been obsolete in about six years. By doing it with Microsoft, we were able to deploy the machines and ensure they will be there and upgradeable. If a new GPU comes out, we can upgrade to the new GPU and it won’t be much cost to us to migrate.
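For illustration, the capex-versus-cloud reasoning Sumano describes can be sketched in a few lines. Only the $4 million figure and roughly six-year hardware lifespan come from the interview; the cloud hourly rate, annual usage hours and user count below are purely hypothetical assumptions.

```python
# Hypothetical cost comparison: on-prem GPU VDI capex vs. cloud VDI opex.
# Only the $4M figure and six-year lifespan come from the interview; the
# cloud rate, usage hours and user count are illustrative assumptions.

def onprem_annual_cost(capex: float, useful_life_years: float) -> float:
    """Straight-line amortization of an up-front hardware purchase."""
    return capex / useful_life_years

def cloud_annual_cost(hourly_rate: float, hours_per_year: float, users: int) -> float:
    """Pay-as-you-go cost for GPU-enabled cloud desktops."""
    return hourly_rate * hours_per_year * users

onprem = onprem_annual_cost(capex=4_000_000, useful_life_years=6)
cloud = cloud_annual_cost(hourly_rate=1.20, hours_per_year=2_000, users=250)

print(f"on-prem: ${onprem:,.0f}/yr, cloud: ${cloud:,.0f}/yr")
# prints: on-prem: $666,667/yr, cloud: $600,000/yr
```

Under these made-up inputs the two come out comparable, which is why the upgrade path (no stranded hardware when a new GPU generation ships) becomes the deciding factor rather than raw cost.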

How has your experience in deploying Workspot been so far? What challenges have you met?

Sumano: It was a battle trying to rip the PCs from engineers’ hands. They had a lot of workstations [and] they really did not want to give them up. We did the first 125 between October 2017 and February 2018. … That pushed back the rest of the company by about a year and a half. We didn’t get started again until about October of 2019. By that time, everyone had settled in, and they all agreed it was the best thing we’ve ever done and we should push forward. That’s coming from the bottom up, so management is very comfortable now doing the rest of the company.

How did you convince workers that the virtualization service was worthwhile?

Sumano: They were convinced when they went home and were able to work, or when they were in a hotel room and they were able to work. When they were at a soccer match for their kids, and something came up that needed attention right away, they pulled out their iPads and were able … to manipulate [designs] or check something out. That’s when it kicked in.

In the past, when they went to a job site, [working] was a really bad experience. We invested a lot of money into job sites to do replication [there].

[With Workspot,] they were able to pick up their laptops, go to the job site and work just like they were at the office.

The novel coronavirus has forced companies to adopt work-at-home policies. What is Southland’s situation?

Sumano: We have offices in Union City [California], which is Marin County, and they were ordered to stay in place, so everyone was sent home there. We just got notice that Orange County will be sent home. Our Las Vegas offices have also been sent home.

Our job sites are still running, but having this solution has really changed the ability for these engineers to go home and work. Obviously, there’s nothing we can do about the shops — we need to have people on-hand at the shop, [as] we’re not fully automated at that level.

On the construction site, we need guys to install [what Southland has designed]. Those are considered critical by the county. They’re allowed to continue work at the job sites, but everybody from the offices has been sent home, and they’re working from home.

We hadn’t done the transition for the mid-Atlantic division to Workspot. We were planning on finishing that in the next 10 weeks. We are now in a rush and plan on finishing it by next Friday. We’re planning on moving 100 engineers to Workspot, so they’re able to go home.

How has it been, trying to bring many workers online quickly?

Sumano: I’ve been doing this a long time. I’ve implemented large virtual-desktop and large Citrix environments in the past. It’s always been a year to a year-and-a-half endeavor.

We are rushing it for the mid-Atlantic. We’d like to take about 10 weeks to do it — to consolidate servers and reduce footprint. We’re skipping all those processes right now and just enacting [virtualization] on Azure, bringing up all the systems as-is and then putting everyone onto those desktops.

Has the new remote-work situation been a strain on your company’s infrastructure?

Sumano: The amount of people using it is exactly the same. We haven’t heard any issues about internet congestion — that’s always a possibility with more and more people working from home. It’s such a small footprint, the back-and-forth chatter between Workspot and your desktop, that it shouldn’t be affected much.

What’s your level of confidence going forward, given that this may be a protracted situation?

Sumano: We’re very confident. We planned on being 100% Azure-based by December 2020. We’re well on track for doing that, except for, with what’s happening right now, there was a bit of a scramble to get people who didn’t have laptops [some] laptops. There’s a lot of boots on the ground to get people able to work from home.

Most of our data is already on Azure, so it’s a very sustainable model going forward, unless there’s a hiccup on the internet.

Editor’s note: This interview has been edited for clarity and length.

AtScale’s Adaptive Analytics 2020.1 a big step for vendor

AtScale unveiled its Adaptive Analytics 2020.1 platform on Wednesday, with data virtualization for analytics at scale as a central tenet.

The release marks a significant step for AtScale, which specializes in data engineering by serving as a conduit between BI tools and stored data. Not only is it a rebranding of the vendor’s platform — its most recent update was called AtScale 2019.2 and was rolled out in July 2019 — but it also marks a leap in its capabilities.

Previously, as AtScale — based in San Mateo, Calif., and founded in 2013 — built up its capabilities, its focus was on how to get big data to work for analytics, said co-founder and vice president of technology David Mariani. And while AtScale did that, it left the data where it was stored and queried one source at a time.

With AtScale’s Adaptive Analytics 2020.1 — available in general release immediately — users can query multiple sources simultaneously and get responses almost instantaneously, thanks to augmented intelligence and machine learning capabilities. In addition, the platform autonomously engineers data based on each query.

“This is not just an everyday release for us,” Mariani said. “This one is different. With our arrival in the data virtualization space we’re going to disrupt and show its true potential.”

Dave Menninger, analyst at Ventana Research, said that Adaptive Analytics 2020.1 indeed marks a significant step for AtScale.

“This is a major upgrade to the AtScale architecture which introduces the autonomous data engineering capabilities,” he said. “[CTO] Matt Baird and team have completely re-engineered the product to incorporate data virtualization and machine learning to make it easier and faster to combine and analyze data at scale. In some ways you could say they’ve lived up to their name now.”


AtScale has also completely re-engineered its platform, abandoning its roots in Hadoop, to serve both customers who store their data in the cloud and those who keep their data on premises.

“It’s not really about where the AtScale technology runs,” Menninger said. “Rather, they make it easy to work with cloud-based data sources as well as on premises data sources. This is a big change from their Hadoop-based, on-premises roots.”

AtScale’s Adaptive Analytics 2020.1 includes three main features: Multi-Source Intelligent Data Model, Self-Optimizing Query Acceleration Structures and Virtual Cube Catalog.

Multi-Source Intelligent Data Model is a tool that enables users to create logical data models through an intuitive process. It simplifies data modeling by rapidly assembling the data needed for queries, and then maintains its acceleration structures even as workloads increase.

Self-Optimizing Query Acceleration Structures, meanwhile, allow users to add information to their queries without having to re-aggregate the data over and over.

A sample AtScale dashboard shows an organization’s internet sales data.

And Virtual Cube Catalog is a means of speeding up discoverability with lineage and metadata search capabilities that integrate natively into existing data catalogs. This enables business users and data scientists to locate the information they need, according to AtScale.

“The self-optimizing query acceleration provides a key part of the autonomous capabilities,” Menninger said. “Performance tuning big data queries can be difficult and time-consuming. However, it’s the combination of the three capabilities which really makes AtScale stand out.”

Other vendors are attempting to offer similar capabilities, but AtScale’s Adaptive Analytics 2020.1 packages them in a unique way, he added.

“There are competitors offering data virtualization and competitors offering cube-based data models, but AtScale is unique in the way they combine these capabilities with the automated query acceleration,” Menninger said.

Beyond offering a platform that enables data virtualization at scale, speed and efficiency are other key tenets of AtScale’s update, Mariani said. “Data virtualization can now be used to improve complexity and cost,” he said.

Hyper-V Powering Windows Features

December 2019

Hyper-V is Microsoft’s hardware virtualization technology that was initially released with Windows Server 2008 to support server virtualization and has since become a core component of many Microsoft products and features. These features range from enhancing security to empowering developers to enabling the most compatible gaming console. Recent additions to this list include Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2. Additionally, applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop, are also being introduced and improved.

As the scope of Windows virtualization has expanded to become an integral part of the operating system, many new OS capabilities have taken a dependency on Hyper-V. This created compatibility issues with many popular third-party products that provide their own virtualization solutions, forcing users to choose between those applications and OS functionality. Microsoft has therefore partnered extensively with key software vendors such as VMware, VirtualBox and BlueStacks to provide updated solutions that directly leverage Microsoft virtualization technologies, eliminating the need for customers to make this trade-off.

Windows Sandbox is an isolated, temporary desktop environment where you can run untrusted software without fear of lasting impact to your PC. Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, the entire state, including files, registry changes and installed software, is permanently deleted. Windows Sandbox is built using the same technology we developed to securely operate multi-tenant Azure services like Azure Functions, and it provides integration with Windows 10 and support for UI-based applications.

Windows Defender Application Guard (WDAG) is a Windows 10 security feature introduced in the Fall Creators Update (Version 1709, aka RS3) that protects against targeted threats using Microsoft’s Hyper-V virtualization technology. WDAG augments Windows virtualization-based security capabilities to prevent zero-day kernel vulnerabilities from compromising the host operating system. WDAG also gives enterprise users of Microsoft Edge and Internet Explorer (IE) protection from zero-day kernel vulnerabilities by isolating a user’s untrusted browser sessions from the host operating system. Security-conscious enterprises use WDAG to lock down their enterprise hosts while allowing their users to browse non-enterprise content.

Application Guard isolates untrusted sites using a new instance of Windows at the hardware layer.

In order to protect critical resources such as the Windows authentication stack, single sign-on tokens, the Windows Hello biometric stack, and the Virtual Trusted Platform Module, a system’s firmware and hardware must be trustworthy. Windows Defender System Guard reorganizes the existing Windows 10 system integrity features under one roof and sets up the next set of investments in Windows security. It’s designed to make these security guarantees:

  • To protect and maintain the integrity of the system as it starts up
  • To validate that system integrity has truly been maintained through local and remote attestation

Detecting and stopping attacks that tamper with kernel-mode agents at the hypervisor level is a critical component of the unified endpoint protection platform in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP). It’s not without challenges, but the deep integration of Windows Defender Antivirus with hardware-based isolation capabilities allows the detection of artifacts of such attacks.

Hyper-V plays an important role in the container development experience on Windows 10. Since Windows containers require a tight coupling between their OS version and the host they run on, Hyper-V is used to encapsulate containers on Windows 10 in a transparent, lightweight virtual machine. Colloquially, we call these “Hyper-V Isolated Containers.” These containers run in VMs that have been specifically optimized for speed and efficiency when it comes to host resource usage. Hyper-V Isolated Containers most notably allow developers to develop for multiple Linux distros and Windows at the same time, and they are managed just as any container developer would expect because they integrate with all the same tooling (e.g., Docker).

The Windows Hypervisor Platform (WHP) adds an extended user-mode API for third-party virtualization stacks and applications to create and manage partitions at the hypervisor level, configure memory mappings for the partition, and create and control execution of virtual processors. The primary value here is that third-party virtualization software (such as VMware) can co-exist with Hyper-V and other Hyper-V based features. Virtualization-Based Security (VBS) is a recent technology that has enabled this co-existence.

WHP provides an API similar to that of Linux’s KVM and macOS’s Hypervisor Framework, and is currently leveraged on projects by QEMU and VMware.

This diagram provides a high-level overview of a third-party architecture.

WSL 2 is the newest version of the architecture that powers the Windows Subsystem for Linux to run ELF64 Linux binaries on Windows. Its feature updates include increased file system performance as well as added full system call compatibility. This new architecture changes how these Linux binaries interact with Windows and your computer’s hardware, but it still provides the same user experience as WSL 1 (the current widely available version). The main difference is that WSL 2 runs a true Linux kernel inside a virtual machine. Individual Linux distros can run as either WSL 1 or WSL 2 distros, can be upgraded or downgraded at any time, and WSL 1 and WSL 2 distros can run side by side.

Kubernetes started officially supporting Windows Server in production with the release of Kubernetes version 1.14 (in March 2019). Windows-based applications constitute a large portion of the workloads in many organizations. Windows containers provide a modern way for these Windows applications to use DevOps processes and cloud native patterns. Kubernetes has become the de facto standard for container orchestration; hence this support enables a vast ecosystem of Windows applications to not only leverage the power of Kubernetes, but also to leverage the robust and growing ecosystem surrounding it. Organizations with investments in both Windows-based applications and Linux-based applications no longer need to look for separate orchestrators to manage their workloads, leading to increased operational efficiencies across their deployments. The engineering that supported this release relied upon open source and community led approaches that originally brought Windows Server containers to Windows Server 2016.
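As a concrete illustration of mixed-cluster scheduling, a deployment manifest along these lines steers Windows containers onto Windows Server nodes via the well-known `kubernetes.io/os` node label (available in Kubernetes 1.14 and later); the deployment and app names here are hypothetical, while the IIS image is a real public Microsoft image.

```yaml
# Hypothetical manifest: run a Windows container in a mixed
# Linux/Windows cluster (Kubernetes 1.14+).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webapp            # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: win-webapp
  template:
    metadata:
      labels:
        app: win-webapp
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # steer pods to Windows Server nodes
      containers:
      - name: webapp
        image: mcr.microsoft.com/windows/servercore/iis
```

Without the `nodeSelector`, the scheduler could place the Windows pod on a Linux node, where the container image cannot run.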

These components and tools have allowed Microsoft’s Hyper-V technology to introduce new ways of enabling customer experiences. Windows Sandbox, Windows Defender Application Guard, System Guard and Advanced Threat Detection, Hyper-V Isolated Containers, Windows Hypervisor Platform and Windows Subsystem for Linux 2 are all new Hyper-V components that ensure the security and flexibility customers should expect from Windows. The coordination of applications using Hyper-V, such as Kubernetes for Windows and Docker Desktop, also represents Microsoft’s dedication to customer needs, which will remain our focus going forward.

Author: nickeaton

Virtualization-Based Security: Enabled by Default

Virtualization-based Security (VBS) uses hardware virtualization features to create and isolate a secure region of memory from the normal operating system. Windows can use this “virtual secure mode” (VSM) to host a number of security solutions, providing them with greatly increased protection from vulnerabilities in the operating system, and preventing the use of malicious exploits that attempt to defeat operating system protections.

The Microsoft hypervisor creates VSM and enforces restrictions which protect vital operating system resources, provides an isolated execution environment for privileged software and can protect secrets such as authenticated user credentials. With the increased protections offered by VBS, even if malware compromises the operating system kernel, the possible exploits can be greatly limited and contained because the hypervisor can prevent the malware from executing code or accessing secrets.

The Microsoft hypervisor has supported VSM since the earliest versions of Windows 10. However, until recently, Virtualization-based Security has been an optional feature that is most commonly enabled by enterprises. This was great, but the hypervisor development team was not satisfied. We believed that all devices running Windows should have Microsoft’s most advanced and most effective security features enabled by default. In addition to bringing significant security benefits to Windows, achieving default enablement status for the Microsoft hypervisor enables seamless integration of numerous other scenarios leveraging virtualization. Examples include WSL2, Windows Defender Application Guard, Windows Sandbox, Windows Hypervisor Platform support for 3rd party virtualization software, and much more.

With that goal in mind, we have been hard at work over the past several Windows releases optimizing every aspect of VSM. We knew that getting to the point where VBS could be enabled by default would require reducing the performance and power impact of running the Microsoft hypervisor on typical consumer-grade hardware like tablets, laptops and desktop PCs. We had to make the incremental cost of running the hypervisor as close to zero as possible and this was going to require close partnership with the Windows kernel team and our closest silicon partners – Intel, AMD, and ARM (Qualcomm).

Through software innovations like HyperClear and by making significant hypervisor and Windows kernel changes to avoid fragmenting large pages in the second-level address translation table, we were able to dramatically reduce the runtime performance and power impact of hypervisor memory management. We also heavily optimized hot hypervisor codepaths responsible for things like interrupt virtualization – taking advantage of hardware virtualization assists where we found that it was helpful to do so. Last but not least, we further reduced the performance and power impact of a key VSM feature called Hypervisor-Enforced Code Integrity (HVCI) by working with silicon partners to design completely new hardware features including Intel’s Mode-based execute control for EPT (MBEC), AMD’s Guest-mode execute trap for NPT (GMET), and ARM’s Translation table stage 2 Unprivileged Execute-never (TTS2UXN).

I’m proud to say that as of Windows 10 version 1903 9D, we have succeeded in enabling Virtualization-based Security by default on some capable hardware!

The Samsung Galaxy Book2 is officially the first Windows PC to have VBS enabled by default. This PC is built around the Qualcomm Snapdragon 850 processor, a 64-bit ARM processor. This is particularly exciting for the Microsoft hypervisor development team because it also marks the first time that enabling our hypervisor is officially supported on any ARM-based device.

Keep an eye on this blog for announcements regarding the default-enablement of VBS on additional hardware and in future versions of Windows 10.

Author: brucesherwin

Top 10 virtualization strategies of 2019

2019 has seen a vast amount of new innovations and strategies in the ever-evolving field of virtualization. From cost-saving techniques and multi-hypervisor management strategies to hybrid cloud practices and VM-container integration, virtualization admins now have a wide collection of strategies to help guide them into a new year of IT.

As Windows Server 2008 end of life draws near, data centers become more complex and cyber threats loom on the horizon, admins must be ready to tackle any new IT challenges that come their way. Review our top ten virtualization strategies of 2019 to ensure successful data center and virtual environment operations.

1. Virtualization cost-saving strategies

IT administrators responsible for maintaining a virtualization infrastructure must budget carefully and consider several cost-saving strategies. One of the most effective is reviewing licenses and removing or updating wasted ones, such as subscriptions for overlapping tools, to reduce costs.

In addition, admins should consider increasing the density of host VMs. When admins increase VM density, they don’t have to purchase new servers to accommodate additional workloads. Increasing VM density also reduces licensing costs, and admins can enable dynamic memory to ensure their systems use a host VM’s existing memory efficiently.
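The consolidation arithmetic behind increasing VM density can be sketched as follows; every figure below is a hypothetical assumption for illustration, not a vendor benchmark.

```python
# Illustrative sketch of the consolidation math behind VM density:
# all figures here are hypothetical assumptions.
import math

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Hosts required at a given consolidation ratio."""
    return math.ceil(total_vms / vms_per_host)

total_vms = 300
before = hosts_needed(total_vms, vms_per_host=20)  # conservative static memory sizing
after = hosts_needed(total_vms, vms_per_host=30)   # denser packing, e.g. with dynamic memory

print(f"hosts: {before} -> {after}, saved: {before - after}")
# prints: hosts: 15 -> 10, saved: 5
```

Each host avoided saves not just server hardware but also any per-host licensing, which is why density and license review compound as cost levers.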

2. Better manage multi-hypervisor environments

Multi-hypervisor environments provide admins with benefits such as reduced licensing costs and flexibility, but managing a multi-hypervisor environment can be challenging. To lessen the complexity of multi-hypervisor management, admins can execute several management strategies, such as carefully choosing multi-hypervisor management tools, learning how to work with several tools simultaneously and implementing security practices and training.

With these strategies, admins can successfully lessen the complexity of multi-hypervisor management as well as balance overall data center costs. By keeping tools and procedures flexible, admins can adjust their processes as they move between different workloads.

3. Use vCenter maps to keep track of virtual environment components

VMware’s vCenter maps enable admins to keep track of their VMs, hosts, data stores, networks and other virtual environment components and understand how they relate to each other. To use vCenter maps, admins must log in to their vCenter Server using the vSphere Client.

Once there, admins can click on any object in the inventory and select the Maps tab, which then provides an interface to customize information displayed on the map. Within this section, admins can discover a vast amount of information, such as VM location and relationships between VMs, to help reduce the complexity of locating and managing virtual environment components.

4. Decide between a micro VM, container or full VM

The decision to use a full VM, container or micro VM isn’t as easy as some might assume, because these technologies can coexist within the same data center as well as on the same server. To make the choice easier, admins must consider the ideal use cases for each technology.

VMs are better suited for traditional, monolithic workloads that require a complete and independent server. In contrast, containers focus on scalability and workload mobility and work best for newer software design architectures that don’t require an extensive security presence. Micro VMs benefit some admins because they have a container architecture that retains many of the benefits of VMs, such as security, isolation and VM creation. Admins often use micro VMs if they require additional isolation as well as independent OSes.
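The guidance above can be condensed into a toy decision helper. The criteria names below are this sketch’s own simplification of the article’s rules of thumb, not a formal taxonomy.

```python
# Toy decision helper encoding the article's rules of thumb for picking
# a runtime; the boolean criteria are this sketch's simplification.

def choose_runtime(monolithic: bool, needs_strong_isolation: bool,
                   needs_independent_os: bool) -> str:
    if monolithic and needs_independent_os:
        return "full VM"      # traditional workload wanting a complete, independent server
    if needs_strong_isolation or needs_independent_os:
        return "micro VM"     # container-like packaging with VM-grade isolation
    return "container"        # scalable, mobile, newer software architectures

print(choose_runtime(monolithic=True, needs_strong_isolation=False,
                     needs_independent_os=True))
# prints: full VM
```

In practice the choice is rarely this binary, since all three technologies can coexist in one data center and even on one server, as the article notes.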

5. Ensure Hyper-V security with a few strategies

To ensure Hyper-V security, admins must use specific security techniques, such as traffic isolation, shielded VMs, Generation 2 VMs and the Secure Boot feature. Some forms of traffic shouldn’t traverse user networks at all; rather, admins should isolate that traffic on a dedicated physical or virtual network segment whenever they can.

In addition, admins can take advantage of shielded and Generation 2 VMs to increase Hyper-V security. Shielded VMs help protect against rogue admins, while Generation 2 VMs are hypervisor-aware and run more efficiently.

Generation 2 VMs also provide security advantages, such as the Secure Boot feature. Secure Boot can prevent malicious code from initializing during a PC’s boot cycle and ensures admins’ devices use only trusted software.
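As a sketch, both settings can be inspected and enabled with Hyper-V’s PowerShell module (the VM name here is hypothetical):

```powershell
# Check whether Secure Boot is enabled on a Generation 2 VM, then turn it on.
Get-VMFirmware -VMName 'AppServer01' | Select-Object VMName, SecureBoot
Set-VMFirmware -VMName 'AppServer01' -EnableSecureBoot On
```

Note that Set-VMFirmware only applies to Generation 2 VMs; Generation 1 VMs use legacy BIOS and have no Secure Boot setting to configure.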

6.     Ensure system health with vSphere monitoring tools

VMware’s vSphere platform provides numerous tools, such as the vSphere Client and Web Client, to monitor vSphere systems and access performance metrics, real-time data usage information, system health data, tracked events and alarms. Admins who monitor their environments can also keep track of historical data throughout the system and manage their storage base.

In addition, the vSphere platform provides utilities for monitoring system components, such as the esxtop and resxtop command-line utilities to reveal real-time ESXi resource usage. These tools provide critical information regarding admins’ data centers, which can help prevent system failures and ensure efficient performance.

7.     Integrate VMs and containers for enhanced benefits

Integrating VMs and containers within the same system enables admins to improve resource utilization, enhance management and maintain a flexible environment for workloads. But integration of these two technologies isn’t always easy, and admins must consider a few practices prior to integration. For example, admins can run containers within VMs because a VM uses its own OS, which can then support a container engine.

But admins shouldn’t deploy a VM in a container because VMs are large, resource-intensive entities. Deploying a VM within a container eliminates the flexibility and mobility admins require from containers. Rather, admins can use tools such as KubeVirt, Rancher VM and Virtlet to take advantage of both container and VM benefits by enabling VMs to work alongside containers on the same interface.

8.     Use hybrid cloud strategies to bridge the on-premises, public cloud gap

The scalability and flexibility of the public cloud has drawn many admins to its computing platform, but some organizations continue to run their workloads on premises. In specific instances, admins might want to extend their on-premises data centers to the public cloud or vice versa. In this case, admins should explore hybrid cloud strategies such as multi-hypervisor deployments and software-defined data centers (SDDCs), or products such as Azure Stack, AWS Outposts and Kubernetes.

Several vendors, such as VMware, Nutanix and Microsoft, offer products and platforms that seek to simplify admins’ migration journeys and enable them to bridge the gap between on-premises workloads and the public cloud.

9.     Avoid SDDC security risks to better protect your data

An SDDC provides admins with storage and network abstraction with the help of highly automated software. An SDDC also provides admins with a private cloud to better control hosted data. Though an SDDC offers several benefits to admins’ virtual environments, it can present possible security risks. One of the main security risks with an SDDC is product compatibility issues, which can lead to lack of a directory service.

In addition, admins might feel tempted to bypass certificate authentication and security and forgo updating security policies and procedures. Admins must fulfill these steps; otherwise, malicious attacks and data breaches might befall their data centers.

10.  Prepare for the Windows Server 2008 end of life

The end of support for Windows Server 2008 and 2008 R2 arrives as 2019 winds down. After January 14, 2020, Microsoft will no longer provide security updates, so admins must find a new way to run their Windows Server 2008 VMs. However, admins can take the Windows Server 2008 end of life in stride.

First, admins must back up their VMs prior to acting. This ensures that admins don’t lose critical information and enables them to rehearse necessary upgrades in a sandbox environment. Once this is complete, admins can migrate their VMs to the Azure cloud. But if admins must keep their VMs on premises, they should upgrade to a newer OS. 
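On a Hyper-V host, the backup step can be as simple as exporting each VM before touching it; a sketch with hypothetical names and paths:

```powershell
# Export a complete copy of the VM (configuration, disks, checkpoints)
# that can later be re-imported or used to rehearse the upgrade in a sandbox.
Export-VM -Name 'WS2008-App' -Path 'D:\VMBackups'
```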

Go to Original Article

How to Resize Virtual Hard Disks in Hyper-V

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016+) and Client Hyper-V (Windows 10) have this capability.

An Overview of Hyper-V Disk Resizing

Hyper-V uses two different formats for virtual hard disk files: the original VHD and the newer VHDX. 2016 added a brokered form of VHDX called a “VHD Set”, which follows the same resize rules as VHDX. We can grow both the VHD and VHDX types easily. We can shrink VHDX files with only a bit of work. No supported way exists to shrink a VHD. Once upon a time, a tool was floating around the Internet that would do it. As far as I know, all links to it have gone stale.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

Resizing a virtual disk file only changes the file. It does not impact its contents. The files, partitions, formatting — all of that remains the same. A VHD/X resize operation does not stand alone. You will need to perform additional steps for the contents.

Requirements for VHD/VHDX Disk Resizing

The shrink operation must occur on a system with Hyper-V installed. The tools rely on a service that only exists with Hyper-V.

If no virtual machine owns the virtual disk, then you can operate on it directly without any additional steps.

If a virtual hard disk belongs to a virtual machine, the rules change a bit:

  • If the virtual machine is Off, any of its disks can be resized as though no one owned them
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Special Requirements for Shrinking VHDX

Growing a VHDX doesn’t require any changes inside the VHDX. Shrinking needs a bit more. Sometimes, quite a bit more. The resize directions that I show in this article will grow or shrink a virtual disk file, but you have to prepare the contents before a shrink operation. We have another article that goes into detail on this subject.

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the VM attached the disk in question to its virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the VM attached the disk in question to its virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.


Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD:
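A minimal example, assuming a hypothetical path and a 40 GB target size:

```powershell
# Grow the virtual disk to 40 GB (the path is hypothetical; use your own VHDX).
Resize-VHD -Path 'C:\LocalVMs\VHDXs\svtest.vhdx' -SizeBytes 40gb
```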

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to a VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (both b and B mean “byte”).
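The suffix arithmetic is easy to check at a PowerShell prompt:

```powershell
# PowerShell expands size suffixes as powers of 1,024.
1kb    # 1024
1gb    # 1073741824
40gb   # 42949672960
```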

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
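Before attempting a shrink, you can check that floor directly (the path is hypothetical):

```powershell
# MinimumSize is reported in bytes; dividing by 1gb makes it readable.
(Get-VHD -Path 'C:\LocalVMs\VHDXs\svtest.vhdx').MinimumSize / 1gb
```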

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that part in an upcoming section.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.


How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it. If the VM has checkpoints, remove them.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink the virtual hard disk. Shrink only appears for VHDXs or VHDSs, and only if they have unallocated space at the end of the file. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
  8. Enter the desired size and click Next.
  9. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

This change only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that in the next sections.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:

After a Virtual Hard Disk Resize Operation

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
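Inside a Windows guest, the partition extension can also be scripted rather than clicked through in Disk Management; a sketch assuming the grown volume is E:

```powershell
# Find the largest supported size for the partition, then extend it to fill
# the newly added space.
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max
```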

Linux distributions have a wide variety of file systems with their own requirements for partitions and sizing. They also have a plenitude of tools to perform the necessary tasks. Perform an Internet search for your distribution and file system.

VHDX Shrink Operations

As previously mentioned, you can’t shrink a VHDX without making changes to the contained file system first. Review our separate article for steps.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/X and compacting a VHD/X. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. That changes the total allocated space of the contained partitions. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Compact makes no changes to the contained data or partitions. We have an article on compacting VHD/Xs that contain Microsoft file systems and another for compacting VHD/Xs with Linux file systems.

Note: this page was originally published in January 2018 and has been updated to be relevant as of December 2019.

Author: Eric Siron

New DataCore vFilO software pools NAS, object storage

DataCore Software is expanding beyond block storage virtualization with new file and object storage capabilities for unstructured data.

Customers can use the new DataCore vFilO to pool and centrally manage disparate file servers, NAS appliances and object stores located on premises and in the cloud. They also have the option to install vFilO on top of DataCore’s SANsymphony block virtualization software, now rebranded as DataCore SDS.

DataCore CMO Gerardo Dada said customers that used the block storage virtualization asked for similar capabilities on the file side. Bringing diverse systems under central management can give them a consistent way to provision, encrypt, migrate and protect data and to locate and share files. Unifying the data also paves the way for customers to use tools such as predictive analytics, Dada said.

Global namespace

The new vFilO software provides a scale-out file system for unstructured data and virtualization technology to abstract existing storage systems. A global namespace facilitates unified access to local and cloud-based data through standard NFS, SMB and S3 protocols. On the back end, vFilO communicates with the file systems through parallel NFS to speed response times. The software separates metadata from the data to facilitate keyword queries, the company said.

Users can set policies at volume or more granular file levels to place frequently accessed data on faster storage and less active data on lower cost options. They can control the frequency of snapshots for data protection, and they can archive data in on-premises or public cloud object storage in compressed and deduplicated format to reduce costs, Dada said.

DataCore’s vFilO supports automated load balancing across the diverse filers, and users can add nodes to scale out capacity and performance. The minimum vFilO configuration for high availability is four virtual machines, with one node managing the metadata services and the other handling the data services, Dada said.

New DataCore vFilO software can pool and manage disparate file servers, NAS appliances and object stores.

File vs. object storage

Steven Hill, a senior analyst of storage technologies at 451 Research, said the industry in general would need to better align file and object storage moving forward to address the emerging unstructured data crisis.

“Most of our applications still depend on file systems, but metadata-rich object is far better suited to longer-term data governance and management — especially in the context of hybrid IT, where much of the data resident in the cloud is based on efficient and reliable objects,” Hill said.

“File systems are great for helping end users remember what their data is and where they put it, but not very useful for identifying and automating policy-based management downstream,” Hill added. “Object storage provides a highly-scalable storage model that’s cloud-friendly and supports the collection of metadata that can then be used to classify and manage that data over time.”

DataCore expects the primary use cases for vFilO to include consolidating file systems and NAS appliances. Customers can use vFilO to move unused or infrequently accessed files to cheaper cloud object storage to free up primary storage space. They can also replicate files for disaster recovery.

Eric Burgener, a research vice president at IDC, said unstructured data is a high growth area. He predicts vFilO will be most attractive to the company’s existing customers. DataCore claims to have more than 10,000 customers.

“DataCore customers already liked the functionality, and they know how to manage it,” Burgener said. “If [vFilO] starts to get traction because of its ease of use, then we may see more pickup on the new customer side.”

Camberley Bates, a managing director and analyst at Evaluator Group, expects DataCore to focus on the media market and other industries needing high performance.

Pricing for vFilO

Pricing for vFilO is based on capacity consumption, with a 10 TB minimum order. One- and three-year subscriptions are available, with different pricing for active and inactive data. A vFilO installation with 10 TB to 49 TB of active data costs $345 per TB for a one-year subscription and $904 per TB for a three-year subscription. For the same capacity range of inactive data, vFilO would cost $175 per TB for a one-year subscription and $459 per TB for a three-year subscription. DataCore offers volume discounts for customers with higher capacity deployments.

The Linux-based vFilO image can run on a virtual machine or on commodity bare-metal servers. Dada said DataCore recommends separate hardware for the differently architected vFilO and SANsymphony products to avoid resource contention. Both products have plugins for Kubernetes and other container environments, Dada added.

The vFilO software became available this week as a software-only product, but Dada said the company could add an appliance if customers and resellers express enough interest. DataCore launched a hyper-converged infrastructure appliance for SANsymphony over the summer. 

DataCore incorporated technology from partners and open source projects into the new vFilO software, but Dada declined to identify the sources.


Oracle and VMware forge new IaaS cloud partnership

SAN FRANCISCO — VMware’s virtualization stack will be made available on Oracle’s IaaS, in a partnership that underscores changing currents in the public cloud market and represents a sharp strategic shift for Oracle.

Under the pact, enterprises will be able to deploy certified VMware software on Oracle Cloud Infrastructure (OCI), the company’s second-generation IaaS. Oracle is now a member of the VMware Cloud Provider Program and will sell VMware’s Cloud Foundation stack for software-defined data centers, the companies said on the opening day of Oracle’s OpenWorld conference.

Oracle plans to give customers full root access to physical servers on OCI, and they can use VMware’s vCenter product to manage on-premises and OCI-based environments through a single tool.

“The VMware you’re running on-premises, you can lift and shift it to the Oracle Cloud,” executive chairman and CTO Larry Ellison said during a keynote. “You really control version management operations, upgrade time of the VMware stack, making it easy for you to migrate — if that’s what you want to do — into the cloud with virtually no change.”

The companies have also reached a mutual agreement around support, which Oracle characterized with the following statement: “[C]ustomers will have access to Oracle technical support for Oracle products running on VMware environments. … Oracle has agreed to support joint customers with active support contracts running supported versions of Oracle products in Oracle supported computing environments.”

It’s worth noting the careful language of that statement, given Oracle and VMware’s history. While Oracle has become more open to supporting its products on VMware environments, it has yet to certify any for VMware.

Moreover, many customers have found Oracle’s licensing policy for deploying its products on VMware devilishly complex. In fact, a cottage industry has emerged around advisory services meant to help customers keep compliant with Oracle and VMware.

Nothing has changed with regard to Oracle’s existing processor license policy, said Vinay Kumar, vice president of product management for OCI. But the VMware software to be made available on OCI will be through bundled, Oracle-sold SKUs that encompass software and physical infrastructure. Initially, one SKU based on X7 bare-metal instances will be available, according to Kumar.

Oracle and VMware have been working on the partnership for the past nine months, he added. The first SKU is expected to be available within the next six months. Kumar declined to provide details on pricing.

Oracle, VMware relations warm in cloudier days

“It seems like there is a thaw between Oracle and VMware,” said Gary Chen, an analyst at IDC. The companies have a huge overlap in terms of customers who use their software in tandem, and want more deployment options, he added. “Oracle customers are stuck on Oracle,” he said. “They have to make Oracle work in the cloud.”


Meanwhile, VMware has already struck cloud-related partnerships with AWS, IBM, Microsoft and Google, leaving Oracle little choice but to follow. Oracle has also largely ceded the general-purpose IaaS market to those competitors, and has positioned OCI for more specialized tasks as well as core enterprise application workloads, which often run on VMware today.

Massive amounts of on-premises enterprise workloads run on VMware, but as companies look to port them to the cloud, they want to do it in the fastest, easiest way possible, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif.

The biggest cost of lift-and-shift deployments to the cloud involves revalidation and testing in the new environment, Mueller added.


But at this point, many enterprises have automated test scripts in place, or even feel comfortable not retesting VMware workloads, according to Mueller. “So the leap of faith involved with deploying a VMware VM on a server in the corporate data center or in a public cloud IaaS is the same,” he said.

In the near term, most customers of the new VMware-OCI service will move Oracle database workloads over, but it will be Oracle’s job to convince them OCI is a good fit for other VMware workloads, Mueller added.


DataCore adds new HCI, analytics, subscription price options

Storage virtualization pioneer DataCore Software revamped its strategy with a new hyper-converged infrastructure appliance, cloud-based predictive analytics service and subscription-based licensing option.

DataCore launched the new offerings this week as part of an expansive DataCore One software-defined storage (SDS) vision that spans primary, secondary, backup and archival storage across data center, cloud and edge sites.

For the last two decades, customers have largely relied on authorized partners and OEMs, such as Lenovo and Western Digital, to buy the hardware to run their DataCore storage software. But next Monday, they’ll find new 1U and 2U DataCore-branded HCI-Flex appliance options that bundle DataCore software and VMware vSphere or Microsoft Hyper-V virtualization technology on Dell EMC hardware. Pricing starts at $21,494 for a 1U box, with 3 TB of usable SSD capacity.

The HCI-Flex appliance reflects “the new thinking of the new DataCore,” said Gerardo Dada, who joined the company last year as chief marketing officer.

DataCore software can pool and manage internal storage, as well as external storage systems from other manufacturers. Standard features include parallel I/O to accelerate performance, automated data tiering, synchronous and asynchronous replication, and thin provisioning.

New DataCore SDS brand

In April 2018, DataCore unified and rebranded its flagship SANsymphony software-defined storage and Hyperconverged Virtual SAN software as DataCore SDS. Although the company’s website continues to feature the original product names, DataCore will gradually transition to the new name, said Augie Gonzalez, director of product marketing at DataCore, based in Fort Lauderdale, Fla.

With the product rebranding, DataCore also switched to simpler per-terabyte pricing instead of charging customers based on a-la-carte features, nodes with capacity limits and separate expansion capacity. With this week’s strategic relaunch, DataCore is adding the option of subscription-based pricing.

Just as DataCore faced competitive pressure to add predictive analytics, the company also needed to provide a subscription option, because many other vendors offer it, said Randy Kerns, a senior strategist at Evaluator Group, based in Boulder, Colo. Kerns said consumption-based pricing has become a requirement for storage vendors competing against the public cloud.

“And it’s good for customers. It certainly is a rescue, if you will, for an IT operation where capital is difficult to come by,” Kerns said, noting that capital expense approvals are becoming a bigger issue at many organizations. He added that human nature also comes into play. “If it’s easier for them to get the approvals with an operational expense than having to go through a large justification process, they’ll go with the path of least resistance,” he said.

DataCore software-defined storage dashboard

DataCore Insight Services

DataCore SDS subscribers will gain access to the new Microsoft Azure-hosted DataCore Insight Services. DIS uses telemetry-based data the vendor has collected from thousands of SANsymphony installations to detect problems, determine best-practice recommendations and plan capacity. The vendor claimed it has more than 10,000 customers.

Like many storage vendors, DataCore will use machine learning and artificial intelligence to analyze the data and help customers to proactively correct issues before they happen. Subscribers will be able to access the information through a cloud-based user interface that is paired with a local web-based DataCore SDS management console to provide resolution steps, according to Steven Hunt, a director of product management at the company.

New DataCore HCI-Flex appliance model on Dell hardware

DataCore customers with perpetual licenses will not have access to DIS. But, for a limited time, the vendor plans to offer a program for them to activate new subscription licenses. Gonzalez said DataCore would apply the annual maintenance and support fees on their perpetual licenses to the corresponding DataCore SDS subscription, so there would be no additional cost. He said the program will run at least through the end of 2019.

Shifting to subscription-based pricing to gain access to DIS could cost a customer more money than perpetual licenses in the long run.

“But this is a service that is cloud-hosted, so it’s difficult from a business perspective to offer it to someone who has a perpetual license,” Dada said.

Johnathan Kendrick, director of business development at DataCore channel partner Universal Systems, said his customers who were briefed on DIS have asked what they need to do to access the services. He said he expects even current customers will want to move to a subscription model to get DIS.

“If you’re an enterprise organization and your data is important, going down for any amount of time will cost your company a lot of money. To be able to see [potential issues] before they happen and have a chance to fix that is a big deal,” he said.

Customers have the option of three DataCore SDS editions: enterprise (EN) for the highest performance and richest feature set, standard (ST) for midrange deployments, and large-scale (LS) for secondary “cheap and deep” storage, Gonzalez said.

Price comparison

Pricing is $416 per terabyte for a one-year subscription of the ST option, with support and software updates. The cost for a perpetual ST license is $833 per terabyte, inclusive of one year of support and software updates. The subsequent annual support and maintenance fees are 20%, or $166 per year, Gonzalez said. He added that loyalty discounts are available.

The new PSP 9 DataCore SDS update that will become generally available in mid-July includes new features, such as AES 256-bit data-at-rest encryption that can be used across pools of storage arrays, support for VMware’s Virtual Volumes 2.0 technology and UI improvements.

DataCore plans another 2019 product update that will include enhanced file access and object storage options, Gonzalez said.

This week’s DataCore One strategic launch comes 15 months after Dave Zabrowski replaced founder George Teixeira as CEO. Teixeira remains with DataCore as chairman.

“They’re serious about pushing toward the future, with the new CEO, new brand, new pricing model and this push to fulfill more of the software-defined stack down the road, adding more long-term archive type storage,” Jeff Kato, a senior analyst at Taneja Group in West Dennis, Mass., said of DataCore. “They could have just hunkered down and stayed where they were at and rested on their installed base. But the fact that they’ve modernized and gone for the future vision means that they want to take a shot at it.

“This was necessary for them,” Kato said. “All the major vendors now have their own software-defined storage stacks, and they have a lot of competition.”
