
Virtual VeeamON 2020 seeks interactivity, cloud focus

A pioneer in virtual data protection is preparing for its now-virtual user conference.

Veeam Software has run virtual events before, but VeeamON 2020 was scheduled for Las Vegas in May until the coronavirus pandemic forced the data protection and management vendor to shift gears. The free virtual edition runs June 17-18.

The backup vendor wants to keep a live atmosphere for its virtual show. Sessions will include live Q&As, and VeeamON 2020 will even include a Veeam party, featuring a performance by Keith Urban.

“It can’t be a death-by-PowerPoint,” Veeam chief marketing officer Jim Kruger said. “You’ve got to mix it up.”

‘New Veeam’ takes on new type of user conference

Veeam has previously hosted a virtual VeeamON in addition to its in-person user conference. Kruger said the company learned from that prior experience and seeks a live component to the virtual show. Along with Q&As, the agenda includes a “VeeamathON” collection of 10 sessions — with live back-and-forth — that highlight different functionality of Veeam products.

“We want to make it as interactive as possible,” Kruger said.

That interactivity is one of the key elements that Christophe Bertrand, senior analyst at Enterprise Strategy Group, will watch for as he attends VeeamON 2020. He said he’ll also look at the quality of the content, how it’s presented and whether the content remains accessible after the show is over.

Though face-to-face interaction is lost, one benefit of a virtual show is being able to view sessions on your own time. Users can easily consume content such as product demos on demand, Bertrand said. And the vendor can keep users coming back.


Kruger said the sessions will be available on-demand.

“You’re changing the pace you can consume the information,” Bertrand said.

Another benefit is the volume of attendees. While a typical Veeam conference might draw about 2,000 people, 12,000 from 148 countries had registered for VeeamON 2020 as of last week, Kruger said. The vendor is hoping for 15,000.

“It gives us a much broader audience to go after,” Kruger said.

Conference speakers include Veeam CEO William Largent and former Microsoft CFO and CIO John Connors. Session topics range from ransomware resiliency to Office 365 backup to Kubernetes protection.

Judging by the VeeamON 2020 tagline, “Elevate Your Cloud Data Strategy,” the cloud will play a significant role over the two days. Sessions include “AWS and Azure Backup Best Practices,” “Cloud Mobility and Data Portability” and “Current Global Backup Trends & the Future State of Cloud Data Management.”

Veeam plans to include product news at the conference. In recent years, Veeam has expanded its data protection from the virtual focus to physical and cloud support. In May, the vendor launched a partnership with Kasten in container data protection. In February, Veeam launched version 10 of its Availability Suite, featuring enhanced NAS backup and ransomware protection.


VeeamON 2020 comes five months after Insight Partners bought the data protection vendor at a $5 billion valuation. At that time, Largent returned to the role of CEO, replacing co-founder Andrei Baronov. Ratmir Timashev, the other founder and also a former Veeam CEO, left his executive vice president position.

“It’s kind of a new Veeam, in a sense,” Bertrand said.

Bertrand listed the cloud, automation and ransomware as key focus areas. He said he’d like to see more from Veeam in intelligent data management — that is, how organizations can reuse data for other purposes such as analytics or DevOps.

Amid the pandemic, vendors should be looking to deliver data protection in a way that uses remote management and automation, Bertrand said. That plays to a lot of vendors’ strengths.

“Veeam has some interesting cards to play,” Bertrand said.

Data protection report cites availability, staff challenges

One topic of discussion at VeeamON 2020 will be the vendor’s recently released “2020 Data Protection Trends” report. It will be part of the keynote, and a couple of sessions will discuss its results.

Dave Russell, Veeam vice president of enterprise strategy, said one of the key takeaways from the report is that the “availability gap” still exists. Seventy-three percent of respondents said they either agreed or strongly agreed that there’s a gap between how fast they need applications recovered and how fast they actually recover, according to the report.

“The vast majority aren’t going to be able to meet their companies’ [service-level agreements],” said Russell, co-author of the report with Jason Buffington, vice president of solutions strategy at Veeam.


Ninety-five percent of organizations suffer unexpected outages and an average outage lasts nearly two hours, according to the report.

Lack of staff to work on new initiatives and lack of budget for new initiatives were the top two data protection challenges, with 42% and 40% of respondents choosing them, respectively.

Veeam commissioned Vanson Bourne to conduct the online survey of 1,550 enterprises from about 20 countries in early 2020. The agency sourced predominantly non-Veeam customers and respondents didn’t know Veeam was behind the report, Russell said.

The research concluded in January, before the pandemic struck most places, and the staffing and funding problems will undoubtedly intensify: layoffs, furloughs and budget cuts have already hit. As a result, simplicity in IT products will be important.

“[Organizations] need a solution that’s going to be very intuitive,” Russell said.

For example, if workers are furloughed or laid off, an organization should be wary of products that require days or weeks of training.

Russell said he thinks cloud use will rise. After the 2008 economic downturn, he saw organizations holding onto their gear longer, with fewer refresh cycles, and the cloud was not what it is today.

“I think we will see people embrace cloud and all its various forms,” such as infrastructure and platform as a service, Russell said.

According to the report, 43% of organizations plan to use cloud-based backup managed by a backup-as-a-service provider within the next two years. Thirty-four percent said they anticipate self-managed backup using cloud services as their organization’s primary method of backing up data.

Russell said he hopes IT administrators go to their bosses with the info in this report and show how their organizations can adapt.

“I hope people can use this as ammunition to say, hey, we’re not so different,” Russell said.

Go to Original Article
Author:

SAP: Partners are the key to customer success

Customer success was the main focus of the SAP Global Partner Summit Online, a virtual conference held this week.

SAP Global Partner Summit Online is a gathering of SAP executives, partners and customers who convene to discuss innovations and resources.

Partners are the key to customer success and happiness, said Karl Fahrbach, who was appointed SAP’s first chief partner officer about a year ago. Partners provide a variety of services for SAP customers, including consulting and implementing systems, as well as developing and marketing applications built on platforms like SAP Cloud Platform, or extensions to systems like SAP SuccessFactors.

“Customer success means that we recognize that, in order to make our customers successful, we need to do it with our partners,” Fahrbach said. “The role of the partner has changed within SAP. It’s no longer about sales with our reselling partners or implementation with our services partners.”

He stressed that partners are key players in advancing SAP’s idea of the intelligent enterprise, a broad vision of advanced enterprise systems that allow companies to transform old business processes or develop new business models.

The initiative to rely on partners as the driving force for customer success comes from the top levels of SAP, a point SAP CEO Christian Klein emphasized in his streamed keynote address.

“Everyone at SAP has to understand that customer success is not about the point of sale,” Klein said. “It continues across the sales lifecycle, and partners play a vital role in that. So, we have to double down on that.”

Klein vowed that SAP would develop tools and programs to simplify and automate partner interactions.

“We owe our ecosystem a much better experience than in the past,” he said.

Focus on implementation quality

At the summit, SAP unveiled new initiatives and enhancements to existing programs that are designed to help partners better serve SAP customers.

For implementation partners, SAP debuted the new Partner Delivery Quality Framework (PDQF), an initiative designed to help partners implement higher-quality projects faster, Fahrbach said.


The PDQF consists of three components: project delivery, partner skills and post-sales management. The first component looks at project delivery quality and establishes feedback loops to ensure that an implementation is on track and adoption is successful.

“You can see in real time how the implementation is going, what’s being deployed, how the adoption is going, because this is key to see if this customer will be successful or not,” he said. “We’re going to share that information with the partner to make sure that we are transparent, and we support the partner in delivering that quality.”

The second component consists of investments in certifications and skills that partners can use to make sure the project quality is high. The third component focuses on the partner’s post-sales management. An SAP team of partner delivery managers will work with partners’ project managers to deliver quality standards and resolve escalations.

SAP partners will also now have free access to the same testing and demo systems that SAP uses internally to develop and demonstrate projects for customers.

This will enable partners to build applications that integrate various SAP platforms, like S/4HANA, SAP Ariba, SAP SuccessFactors, and SAP S/4HANA Cloud, in a test and demo environment that they previously had to pay for, Fahrbach said.

“They will be able to show end-to-end scenarios of the intelligent enterprise without having any additional costs,” he said. “The partners have been asking if they can get the same environments that SAP uses to do the demos, and now they have free access. This will improve the economics for the partners because it’s free, and the quality of the demos will improve as well.”

A quicker path to validated apps

For independent software vendor (ISV) partners that develop SAP-based applications and extensions, SAP unveiled the Partner Solution Progression framework. The initiative enables partners to quickly develop SAP validated products and make them available on the SAP App Center, an online marketplace for applications and SAP product extensions, according to SAP.

Having apps that are validated and well-supported by SAP can be vital to an ISV’s success, and the Partner Solution Progression framework allows ISVs to gradually advance the technical and business quality of their applications. Once a partner puts a validated app on the SAP App Center, it can grow into the Partner Spotlight program that includes more go-to-market support. If the partner’s strategy and app success continue to improve, the app is eligible to be invited to SAP Endorsed Apps, an SAP premium certification initiative.


The idea is to make it much easier for partners to get applications on the SAP App Center and show that they are valuable innovative products, Klein said.

“Business on the SAP App Center has quadrupled, but it took way too long for partners to become a partner in the App Center and to onboard their solution until they make their first dollar in revenue,” Klein said. “We have significantly improved how you become a partner and how you publish in the App Center.”

COVID-19 concerns addressed

When the COVID-19 crisis began earlier in the year, SAP launched a virtual partner advisory council to examine how the crisis might affect the partners’ business and determine what they need to do to address it, Fahrbach said.

One result was a decision to help partners deal with cash-flow issues and credit access, he said. SAP postponed SAP PartnerEdge program fees until later in the year and will not raise annual maintenance fees. SAP PartnerEdge is a program for ISVs that provides resources to help design, develop and bring applications to market.

“We also launched credit service options to make sure that partners have access to credit and have revised commercial guidelines for the cloud,” Fahrbach said.

To that end, partners can now use a consumption-based pricing model that was previously available only to SAP’s direct salesforce: the Cloud Platform Enterprise Agreement (CPEA), which meters a customer’s use of SAP systems on the SAP Cloud Platform so that they’re charged only for what they use.

“This will provide our partners the ability to be flexible in the way customers consume our software, which is especially important these days with COVID-19, ” Fahrbach said.

Proof will be in the pudding

It’s important that SAP’s messaging on the role of partners is coming directly from recently installed CEO Christian Klein, said Shaun Syvertsen, CEO and managing partner of ConvergentIS, an SAP partner based in Calgary, Alta.

“The idea that Klein has recognized and reinforced with his teams that partners should not feel like SAP services is directly competing with them is important,” Syvertsen said. “Certainly for a few years that was a dramatic trend as SAP was really doubling down on services and growing the services teams and sales positioning, so that’s a remarkable shift, and I think it’s a really healthy one.”

SAP partners would often see similar and competing products coming from SAP product management, and it will be interesting to see if this changes, Syvertsen said.

“The idea that an ecosystem matters is something that we’ve heard from Klein over several years, and there has been a tone of being more open to that. So, now we’ll see if some of those behaviors change within the organization to honor some of the investments the partners have made,” he said. “For example, there’s Sodales Solutions [an SAP partner that develops extensions to SAP SuccessFactors]. If SAP comes out with a new module for SuccessFactors that does what Sodales does, that’s not a good sign for anybody. Those are the kinds of things I’m watching for.”

SAP can do more to boost innovative partners

The partner program initiatives are a welcome development for SAP, but the vendor could do even more to highlight smaller niche players that build emerging technology or industry expertise into their applications, said Jon Reed, analyst and co-founder of Diginomica.com, an enterprise applications news and analysis site.


“This is a time when companies are largely pausing on major software upgrades, but they are eager to extend their platforms with impactful apps and analytics that can get up and running quickly,” Reed said.

Many of SAP’s partners have offerings that fit this bill but do not get enough exposure. Some, like Sodales Solutions, have gained visibility this year, but there needs to be more like that, he said.

Joshua Greenbaum, principal at Enterprise Applications Consulting, agreed that the proof will be in the pudding for SAP’s partner relations.


“The spirit is willing in SAP at the top, and we’ll have to wait to see how everything goes,” Greenbaum said. “They are truly dedicated to the proposition that SAP can’t compete without a healthy and vigorous ecosystem, and I think they really mean that, but unfortunately the best practices have not been best for the partners. They’ve been best for SAP in the past, so this is going to be a real wait and see.”

The trajectory that the Partner Solution Progression framework gives partners is perhaps the best development, he said.

“It took a while to articulate the value of having that trajectory to follow to the partners,” he said. “The key is that SAP has to do good by existing partners, but also make it an enticing ecosystem for new partners — and their reputation isn’t that good. With Fahrbach in charge and Klein’s vision, the pieces are there, but these are complicated, inbred cultural behaviors that need to be modified, and that takes time.”


VMs vs. containers: What Windows admins need to know

Virtual machines and containers are both types of virtualized workloads that have more similarities than you may think. Each serves a specific purpose and can significantly increase the performance of your infrastructure — as long as they are employed effectively.

Microsoft unveiled container support in Windows Server 2016, which might have seemed like a novelty feature to many Windows administrators. But now that containers and the surrounding technology — orchestration, networking and storage — have matured in the Windows Server 2019 release, is it time to give containers more thought?

How do you decide when to use VMs vs. containers? Is there a tipping point when you should make a switch? To help steer your decision, let’s cover the three key “-abilities” of containers and virtual machines: reliability, scalability and manageability.

Reliability

When it comes to weighing the options, reliability is one of the first questions any engineer will ask. Although uptime ultimately depends on the engineers and engineering behind the technology, you can infer a lot about each option’s dependability by analyzing its security and maintenance costs.

VMs. VMs are big, heavyweight monoliths. This isn’t a comment about speed, because VMs can be blazingly fast. The reason they’re considered monoliths is that each contains a full stack of technology: virtualized hardware, an operating system and even more software, all layered on top of each other in one package.

The advantage of utilizing VMs becomes apparent when you drill down to the hypervisor. VMs have full isolation between themselves and any other VM running on the same hardware or in the same cluster. This is highly secure; you can’t directly attack one VM in a cluster from another.

The other reliability advantage is longevity. People have been using VMs in Windows production environments for about 20 years. There are a large number of engineers with vast amounts of experience managing, deploying and troubleshooting VMs. If an issue with a VM arises, there’s a good chance it’s not a unique occurrence.

Containers. Containers are lightweight and less hardware-intensive because they aren’t running a full software stack. A container can be thought of as a wrapper around a process or application that can run in a stand-alone fashion.

You can run many containers on the same VM; due to this, you don’t have full isolation in containers. You do have process isolation, but it’s not as absolute as it is with a VM. This can cause some difficulties in spinning up and maintaining containers when determining how to parcel out resources.

Additionally, because containers are relatively new compared to VMs, you might have trouble finding engineers with a similar depth of experience managing them. There are additional technologies to bring in to help with administration and orchestration, and the learning curve to get started is generally seen as steeper than with more traditional technologies, such as VMs.

Scalability

Scalability is the capability of the technology to maximize utilization across your environment. When you’re ready for your application to be accessed by tens of thousands of people, scalability is your friend.

VMs. VMs take a long time to spin up and deploy. Cloud technology such as AWS Auto Scaling and Azure Virtual Machine Scale Sets build out clones of the same VM and load-balance across them. While this is one way to reach scale, it’s a little clunky because of the VM spin-up time.

For a one-off application, VMs can host it and work well, but when it comes to reaching the masses, they can fall short. This is particularly true when attempting to use non-cloud-native automation to scale VMs. The sheer time difference between a VM deployment and a container deployment can cause your automation to go haywire.

Containers. Containers were built for scale. You can spin up one or a hundred new containers in milliseconds, which makes automation and orchestration with native cloud tooling a breeze.

Scale is so innate to containers that the real question is, “How far do you want to go?” You can run your own Kubernetes orchestration on AWS or Azure IaaS, or take it one step further with PaaS technologies such as AWS Fargate or Azure Container Instances.
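As a rough sketch of what that orchestration looks like in practice (all names and the image URL here are hypothetical), a Kubernetes Deployment declares a desired replica count, and the orchestrator spins containers up or down to match it:

```yaml
# Hypothetical Kubernetes Deployment: the 'replicas' field tells the
# orchestrator how many identical containers to keep running. Changing
# it from 1 to 100 scales out in seconds, with no new VMs to provision.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                          # hypothetical name
spec:
  replicas: 100                           # desired container count
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: example.com/demo-app:1.0   # hypothetical image
```

The same manifest works whether the cluster runs on self-managed IaaS nodes or on a PaaS offering that hides the nodes entirely.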

Manageability

Once you have your VMs or containers running in production, you need a way to manage them. Deploying changes, updating software and even rotating technologies all fall under this purview.

VMs. There are scores of third-party tools to manage VMs, such as Puppet, Chef, System Center Configuration Manager and IBM BigFix. Each does software deployment, runs queries on your environment, and even performs more complex desired state configuration tasks. There are also a host of vendor tools to manage your VMs inside VMware, Citrix and Hyper-V.

VMs require care and feeding. Usually when you create a VM, it follows a lifecycle from spin-up to its sunset date. In between, it requires maintenance and monitoring. This runs contrary to newer methodologies such as DevOps, infrastructure as code and immutable infrastructure. In these paradigms, servers and services are treated like cattle, not pets.

Containers. Orchestration and immutability are the hallmarks of containers. If a container breaks, you kill it and deploy another one without a second thought. There is no backup and restore procedure. Instead of spending time modifying or maintaining your environment, you fix a container by destroying it and creating a new one. VMs, because of the associated time and maintenance costs, simply can’t keep up with containers in this respect.
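The destroy-and-recreate pattern can be sketched as a config change (names and image URL are hypothetical): “fixing” a container typically means pointing the manifest at a new image, after which the orchestrator replaces the running containers rather than patching them in place.

```yaml
# Immutable-infrastructure sketch (partial Deployment spec): to roll out
# a fix, bump the image tag. The orchestrator destroys the old containers
# and creates fresh ones; nothing is modified or repaired in place.
spec:
  template:
    spec:
      containers:
      - name: demo-app
        image: example.com/demo-app:1.1   # was 1.0; changing the tag
                                          # triggers a rolling replacement
```

This is why there is no backup-and-restore step for the container itself: any state worth keeping lives outside it, and the container is disposable.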

Containers are tailored for DevOps; they are a component of the infrastructure that treats developers and infrastructure operators as first-class citizens. Layering this new methodology on new technology allows teams to get things done faster by reducing the complexities tied to workload management.

Which is the way to go?

In the contest of VMs vs. containers, which one wins? The answer depends on your IT team and your use case. There are instances where VMs will continue to have an advantage and others where containers are a better choice. This comparison has just scratched the surface of the technical differences, but there are financial advantages to consider as well.

In a real-world environment, you will likely need both technologies. Monolithic VMs make sense for more solid and stable services such as Active Directory or the Exchange Server platform. For your development team and your homegrown apps utilizing the latest in release pipeline technology, containers will help them get up to speed and scale to the needs of your organization.


Ragnar Locker ransomware attack hides inside virtual machine

Threat actors developed a new type of ransomware attack that uses virtual machines, Sophos revealed Thursday in a blog post.

Sophos researchers recently detected a Ragnar Locker ransomware attack that “takes defense evasion to a new level.” According to the post, the ransomware variant was deployed inside a Windows XP virtual machine in order to hide the malicious code from antimalware detection. The attack includes an old version of Sun xVM VirtualBox, a free, open source hypervisor that came to Oracle through its 2010 acquisition of Sun Microsystems.

“In the detected attack, the Ragnar Locker actors used a GPO task to execute Microsoft Installer (msiexec.exe), passing parameters to download and silently install a 122 MB crafted, unsigned MSI package from a remote web server,” Mark Loman, Sophos’ director of engineering for threat mitigation, wrote in the post.

The MSI package contained Sun xVM VirtualBox version 3.0.4, released in August 2009, and “an image of a stripped-down version of the Windows XP SP3 operating system, called MicroXP v0.82.” In that image is a 49 KB Ragnar Locker executable file.

“Since the vrun.exe ransomware application runs inside the virtual guest machine, its process and behaviors can run unhindered, because they’re out of reach for security software on the physical host machine,” Loman wrote.

This was the first time Sophos has seen virtual machines used for ransomware attacks, Loman said.

It’s unclear how many organizations were affected by this recent attack and how widespread it was. Sophos was unavailable for comment at press time. In the past, the Ragnar Locker ransomware group has targeted managed service providers and used their remote access to clients to infect more organizations.

In other Sophos news, the company published an update Thursday regarding the attacks on Sophos XG Firewalls. Threat actors used a customized Trojan Sophos calls “Asnarök” to exploit a zero-day SQL vulnerability in the firewalls, which the vendor quickly patched through a hotfix. Sophos researchers said the Asnarök attackers tried to bypass the hotfix and deploy ransomware in customer environments. However, Sophos said it took other steps to mitigate the threat beyond the hotfix, which prevented the modified attacks.


Imprivata and Azure AD help healthcare delivery organizations deliver safe and secure care

As hospitals and other healthcare delivery organizations accelerate their adoption of virtual care and mobile devices in response to the COVID-19 outbreak, it’s critical that providers can access cloud and on-premises apps quickly and securely. Imprivata is a healthcare-focused digital identity company that addresses this need. For today’s “Voice of the ISV” blog, I invited Kristina Cairns and Mark Erwich of Imprivata to provide insight into how Imprivata’s solutions are helping healthcare organizations deliver care beyond the four walls of the hospitals.

Supporting healthcare delivery organizations during COVID-19

By Kristina Cairns, Director of Product Marketing, Imprivata and Mark Erwich, VP Marketing, Imprivata

In response to COVID-19, hospitals and clinics have turned to remote tools to care for a surge of patients, while protecting the health of staff. These tools let clinicians connect remotely with patients, care teams, and other organizations, but they can be difficult to securely access from shared workstations or mobile devices, such as tablets. Imprivata digital identity solutions simplify access while maintaining security, so clinicians can deliver quality care safely and conveniently—no matter where they are located.

At the same time, healthcare staffing demands are skyrocketing, and these needs must be met in real time. This can mean quickly adding, or provisioning, new or re-allocated staff and ensuring they have proper access to applications, immediately. Once the crisis is over, these same staff will need to be de-provisioned to ensure security and compliance requirements are met.

Imprivata is a digital identity company that focuses on healthcare. We employ doctors and nurses who have a real-world understanding of the unique needs of hospital environments. Our solutions are designed to work with healthcare workflows and regulations, so hospitals can get up and running with new tools and upgrades, fast. In these challenging times, we’ve partnered with Microsoft to provide an integrated identity and access management platform that meets the needs of healthcare organizations. Our joint solutions make it easy to connect to healthcare’s existing identity and application data and automate at scale. Healthcare providers can use our platforms to address unique demands, such as:

  • Saving precious time in hospitals: Accessing necessary apps quickly while healthcare providers move between clinical workstations and new networked devices at the point of patient care.
  • Protecting healthcare staff and patients: Identifying providers potentially exposed to COVID-19.
  • Scaling up remote work and virtual care: Providing remote access to a diverse set of tools spanning on-premises and cloud infrastructure as providers and patients move outside of traditional healthcare environments.
  • Simplifying role-based access identity management: Securely managing access for temporary workers and existing staff who change roles or departments.

Saving precious time in hospitals

Healthcare workers are busy in the best of times. They juggle administrative tasks with a full day of patient care. As the pandemic has driven up the number of patients admitted to hospitals, time has become even more precious. Imprivata OneSign is a single sign-on (SSO) solution that enables care providers to spend less time with technology and more time with patients.

During a shift, healthcare workers use several cloud and on-premises applications including business and enterprise applications, electronic health records, medical imaging, patient management, and other systems. Each of these apps in this hybrid environment often requires a unique username and password. Imprivata OneSign eliminates the need for clinicians to memorize and manually enter their credentials. They can sign in once to access all their on-premises and cloud apps, including Microsoft Teams, Office 365, and 3,000+ Microsoft Azure Active Directory (Azure AD) Marketplace applications. No Click Access™ lets them sign in with a badge or fingerprint, making it faster to access applications and workflows.

Protecting healthcare staff and patients

As healthcare delivery organizations treat patients under evaluation for COVID-19, they must also safeguard the health of clinicians. Yale New Haven Health is using Imprivata OneSign reporting capabilities to identify exactly where and when specific users accessed specific workstations in different patient care zones in the clinical environment. By combining these data with workstation mapping and electronic health record data, Yale can more accurately identify all providers potentially exposed to COVID-19 and take necessary actions.

Scaling up remote work and virtual care

To limit the spread of COVID-19, administrative roles at clinics and hospitals have migrated to remote work when possible. Care providers have rapidly scaled up virtual care services to provide non-emergency healthcare consultations. These providers need to access systems on personal laptops, mobile devices, and temporary devices in temporary care sites. It’s important that devices and individuals are authenticated to protect sensitive data and apps.

Imprivata Confirm ID for Remote Access improves security by enabling multi-factor authentication for remote network access, cloud applications, Windows servers and desktops, and other critical systems and workflows. Imprivata Confirm ID for EPCS (electronic prescribing of controlled substances) supports Drug Enforcement Agency (DEA)-compliant two-factor authentication methods so providers can quickly prescribe drugs using EPCS workflows. To support healthcare organizations during this crisis we are offering Imprivata Confirm ID licenses for free.

 

Simplifying role-based access identity management

As the number of patients increases, hospitals are rapidly reassigning workers within the organization while onboarding clinicians from less utilized hospitals. Healthcare organizations need easy and secure ways to manage user roles as they scale up and provision temporary workers.

Imprivata Identity Governance is an end-to-end solution with granular, role-based access controls and automated provisioning and de-provisioning. Streamlined auditing processes and analytics enable faster threat evaluation and remediation. These capabilities allow IT to respond to the needs of the organization without sacrificing security. Imprivata Identity Governance ensures that, on day one, the right users have the right access to the right on-premises and cloud applications, and the audit trail to prove it.

Imprivata Identity Governance can now be hosted in an Azure environment, unlocking scalability and flexibility for healthcare enterprises.

Making healthcare technology available to everyone

The following resources can help hospitals and clinics move quickly to support patient care beyond the four walls of the hospital:

Learn more

Solutions like the Imprivata Identity and Access Management platform, Microsoft Azure AD, and Microsoft Teams are helping keep healthcare workers productive and safe as they confront the current crisis. As healthcare evolves, Microsoft and Imprivata will continue to innovate together to further enhance scenarios for in-person and remote access.

Learn more about Microsoft’s COVID-19 response and Imprivata’s COVID-19 response.

Read about capabilities in Teams that support healthcare workers and other integrations between Microsoft and Imprivata.

Go to Original Article
Author: Microsoft News Center

Oracle’s GraalVM finds its place in Java app ecosystem

One year after its initial release for production use, Oracle’s GraalVM universal virtual machine has found validation in the market, evidenced by industry-driven integrations with cloud-native development projects such as Quarkus, Micronaut, Helidon and Spring Boot.

GraalVM supports applications written in Java, JavaScript and other programming languages and execution modes. But it means different things to different people, said Bradley Shimmin, an analyst with Omdia in Longmeadow, Mass.

First, it’s a runtime that can support a wide array of non-Java languages such as JavaScript, Ruby, Python, R, WebAssembly and C/C++, he said. And it can do the same for Java Virtual Machine (JVM) languages as well, namely Java, Scala and Kotlin.

Second, GraalVM is a native code generator capable of ahead-of-time compiling: the act of compiling a higher-level programming language such as Java into native machine code so that the resulting binary file can execute natively.
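As a quick illustration of that second role (assuming a GraalVM distribution with the optional native-image component installed, and a hypothetical HelloWorld class), a plain Java class can be compiled ahead of time into a standalone executable:

```shell
# Compile the Java source to bytecode, then AOT-compile it to a native binary
javac HelloWorld.java
native-image HelloWorld

# The result is a self-contained executable with near-instant startup;
# by default native-image lowercases the class name for the output file
./helloworld
```

No JVM needs to be present at run time, which is what makes this attractive for fast-starting, small-footprint deployments.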

“GraalVM is really quite a flexible ecosystem of capabilities,” Shimmin said. “For example, it can run on its own or be embedded as a part of the OpenJDK. In short, it allows Java developers to tackle some specific problems such as the need for fast app startup times, and it allows non-Java developers to enjoy some of the benefits of a JVM such as portability.”

GraalVM came out of Oracle Labs, which used to be Sun Labs. “Basically, it is the answer to the question, ‘What would it look like if we could write the Java native compiler in Java itself?'” said Cameron Purdy, former senior vice president of development at Oracle and current CEO of Xqiz.it, a stealth startup in Lexington, Mass., that is working to deliver a platform for building cloud-native applications.

“The hypothesis behind the Graal implementation is that a compiler built in Java would be more easily maintained over time, and eventually would be compiling itself or ‘bootstrapped’ in compiler parlance,” Purdy added.

The GraalVM project’s overall mission was to build a universal virtual machine that can run any programming language.

The big idea was that a compiler didn’t have to have built-in knowledge of the semantics of any of the supported languages. The common belief of VM architects had been that a language VM needed to understand those semantics in order to achieve optimal performance.

“GraalVM has disproved this notion by demonstrating that a multilingual VM with competitive performance is possible and that the best way to do it isn’t through a language-specific bytecode like Java or Microsoft CLR [Common Language Runtime],” said Eric Sedlar, vice president and technical director of Oracle Labs.

To achieve this, the team developed a new high-performance optimizing compiler and a language implementation framework that makes it possible to add new languages to the platform quickly, Sedlar said. The GraalVM compiler provides significant performance improvements for Java applications without any code changes, according to Sedlar. Embeddability is another goal. For example, GraalVM can be plugged into system components such as a database.

GraalVM joins broader ecosystem

One of the higher-profile integrations for GraalVM is with Red Hat’s Quarkus, a web application framework with related extensions for Java applications. In essence, Quarkus tailors applications for Oracle’s GraalVM and HotSpot compiler, which means that applications written in it can benefit from using GraalVM native image technology to achieve near instantaneous startup and significantly lower memory consumption compared to what one can expect from a typical Java application at runtime.

“GraalVM is interesting to me as it potentially speeds up Java execution and reduces the footprint – both of which are useful for modern Java applications running on the cloud or at the edge,” said Jeffrey Hammond, an analyst at Forrester Research. “In particular, I’m watching the combination of Graal and Quarkus as together they look really fast and really small — just the kind of thing needed for microservices on Java running in a FaaS environment.”


Quarkus uses the open source, upstream GraalVM project and not the commercial products — Oracle GraalVM or Oracle GraalVM Enterprise Edition.

“Quarkus applications can either be run efficiently in JVM mode or compiled and optimized further to run in Native mode, ensuring developers have the best runtime environment for their particular application,” said Rich Sharples, senior director of product management at Red Hat.

Red Hat officials believe Quarkus will be an important technology for two of its most important constituents — developers who are choosing Kubernetes and OpenShift as their strategic application development and production platform and enterprise developers with deep roots in Java.

“That intersection is pretty huge and growing and represents a key target market for Red Hat and IBM,” Sharples said. “It represents organizations across all industries who are building out the next generation of business-critical applications that will provide those organizations with a competitive advantage.”


How to install the Windows Server 2019 VPN

Many organizations rely on a virtual private network, particularly those with a large number of remote workers who need access to resources.

While there are numerous vendors selling their VPN products in the IT market, Windows administrators also have the option to use the built-in VPN that comes with Windows Server. One of the benefits of using Windows Server 2019 VPN technology is there is no additional cost to your organization once you purchase the license.

Another perk with using a Windows Server 2019 VPN is the integration of the VPN with the server operating system reduces the number of infrastructure components that can break. An organization that uses a third-party VPN product will have an additional hoop the IT staff must jump through if remote users can’t connect to the VPN and lose access to network resources they need to do their jobs.

One relatively new feature in Windows Server 2019 VPN functionality is the Always On VPN, which some users in various message boards and blogs have speculated will eventually replace DirectAccess, which remains supported in Windows Server 2019. Microsoft cites several advantages of Always On VPN, including granular app- and traffic-based rules to restrict network access, support for both RSA and elliptic curve cryptography algorithms, and native Extensible Authentication Protocol support to enable the use of a wider variety of advanced authentication methods.

Microsoft documentation recommends organizations that currently use DirectAccess to check Always On VPN functionality before migrating their remote access processes.

The following transcript for the video tutorial by contributor Brien Posey explains how to install the Windows Server 2019 VPN role. 

In this video, I want to show you how to configure Windows Server 2019 to act as a VPN server.

Right now, I’m logged into a domain joined Windows Server 2019 machine and I’ll get the Server Manager open so let’s go ahead and get started.

The first thing that I’m going to do is click on Manage and then I’ll click on Add Roles and Features.

This is going to launch the Add Roles and Features wizard.

I’ll go ahead and click Next on the Before you begin screen.

For the installation type, I’m going to choose Role-based or feature-based installation and click Next. From there I’m going to make sure that my local server is selected. I’ll click Next.

Now I’m prompted to choose the server role that I want to deploy. You’ll notice that right here we have Remote Access. I’ll go ahead and select that now. Incidentally, in the past, this was listed as Routing and Remote Access, but now it’s just listed as a Remote Access. I’ll go ahead and click Next.

I don’t need to install any additional feature, so I’ll click Next again, and I’ll click Next [again].

Now I’m prompted to choose the Role Services that I want to install. In this case, my goal is to turn the server into a VPN, so I’m going to choose DirectAccess and VPN (RAS).

There are some additional features that are going to need to be installed to meet the various dependencies, so I’ll click Add Features and then I’ll click Next. I’ll click Next again, and I’ll click Next [again].

I’m taken to a confirmation screen where I can make sure that all of the necessary components are listed. Everything seems to be fine here, so I’ll click Install and the installation process begins.

So, after a few minutes the installation process completes. I’ll go ahead and close this out and then I’ll click on the Notifications icon. We can see that some post-deployment configuration is required. I’m going to click on the Open the Getting Started Wizard link.

I’m taken into the Configure Remote Access wizard and you’ll notice that we have three choices here: Deploy both DirectAccess and VPN, Deploy DirectAccess Only and Deploy VPN Only. I’m going to opt to Deploy VPN Only, so I’ll click on that option.

I’m taken into the Routing and Remote Access console. Here you can see our VPN server. The red icon indicates that it hasn’t yet been configured. I’m going to right-click on the VPN server and choose the Configure and Enable Routing and Remote Access option. This is going to open up the Routing and Remote Access Server Setup Wizard. I’ll go ahead and click Next.

I’m asked how I want to configure the server. You’ll notice that the very first option on the list is Remote access dial-up or VPN. That’s the option that I want to use, so I’m just going to click Next since it’s already selected.

I’m prompted to choose my connections that I want to use. Rather than using dial-up, I’m just going to use VPN, so I’ll select the VPN checkbox and click Next.

The next thing that I have to do is tell Windows which interface connects to the internet. In my case it’s this first interface, so I’m going to select that and click Next.

I have to choose how I want IP addresses to be assigned to remote clients. I want those addresses to be assigned automatically, so I’m going to make sure Automatically is selected and click Next.

The next prompt asks me if I want to use a RADIUS server for authentication. I don’t have a RADIUS server in my own organization, so I’m going to choose the option No, use Routing and Remote Access to authenticate connection requests instead. That’s selected by default, so I can simply click Next.

I’m taken to a summary screen where I have the chance to review all of the settings that I’ve enabled. If I scroll through this, everything appears to be correct. I’ll go ahead and click Finish.

You can see that the Routing and Remote Access service is starting and so now my VPN server has been enabled.
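For administrators who prefer scripting, the role installation walked through in the transcript can likely also be done from an elevated PowerShell prompt (a sketch; the feature name reflects Windows Server 2019 and is not taken from the transcript itself):

```powershell
# Install the Remote Access role with the DirectAccess/VPN role service,
# plus the management tools, mirroring the wizard's selections
Install-WindowsFeature DirectAccess-VPN -IncludeManagementTools
```

The post-deployment configuration (the Routing and Remote Access Server Setup Wizard steps) would still follow as shown in the video.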

View All Videos


How to Customize Hyper-V VMs using PowerShell

In this article, we’ll be covering all the main points for deploying and modifying virtual machines in Hyper-V using PowerShell.

You can create a Hyper-V virtual machine easily using a number of tools. The “easy” tool, Hyper-V Manager’s New Virtual Machine Wizard (and its near-equivalent in Failover Cluster Manager), creates only a basic system. It has a number of defaults that you might not like. If you forget to change something, then you might have to schedule downtime later to correct it. You have other choices for VM creation. Among these, PowerShell gives you the greatest granularity and level of control. We’ll take a tour of the capability at your fingertips. After the tour, we will present some ways that you can leverage this knowledge to make VM creation simpler, quicker, and less error-prone than any of the GUI methods.

Cmdlets Related to Virtual Machine Creation

Of course, you create virtual machines using New-VM. But, like the wizards, it has limits. You will use other cmdlets to finesse the final product into exactly what you need.
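A representative sampling of those supporting cmdlets (my selection for orientation, not an exhaustive list):

```powershell
# Browse everything the Hyper-V module offers
Get-Command -Module Hyper-V

# Cmdlets that most often follow New-VM:
# Set-VM, Set-VMProcessor, Set-VMMemory, New-VHD, Add-VMHardDiskDrive,
# Add-VMDvdDrive, Connect-VMNetworkAdapter, Set-VMFirmware
```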

The above cmdlets encompass the features needed for a majority of cases. If you need something else, you can start with the complete list of Hyper-V-related cmdlets.

Note: This article was written using the cmdlets as found on Windows Server 2019.

Comparing PowerShell to the Wizards for Virtual Machine Creation

PowerShell makes some things a lot quicker than the GUI tools. That doesn’t always apply to virtual machine creation. You will need to consider the overall level of effort before choosing either approach. Start with an understanding of what each approach asks of you.

The GUI Wizard Outcome

Virtual machine configuration points you can control when creating a virtual machine using the wizard:

  • Name
  • Virtual machine generation
  • Storage location
  • Virtual switch connection
  • The startup memory quantity and whether the VM uses Dynamic Memory
  • Attach or create one VHD or VHDX to the default location
  • Initial boot configuration

If you used the wizard from within Failover Cluster Manager, it will have also added the VM to the cluster’s highly-available roles.

The wizard does fairly well at hitting the major configuration points for a new virtual machine. It does miss some things, though. Most notably, you only get one vCPU. Once you finish using the wizard to create a virtual machine, you must then work through the VM’s properties to fix up anything else.

The Windows Admin Center Outcome

Virtual machine configuration points you can control when creating a virtual machine using Windows Admin Center:

  • Name
  • Virtual machine generation
  • The system that will host the VM (if you started VM creation from the cluster panel)
  • Base storage location — one override for both the VM’s configuration files and the VHDX(s)
  • Virtual switch connection
  • The number of virtual processors
  • The startup memory quantity, whether the VM uses Dynamic Memory, and DM’s minimum and maximum values
  • Attach or create one or more VHDs or VHDXs
  • Initial boot configuration

If you used the wizard from within Failover Cluster Manager, it will have also added the VM to the cluster’s highly-available roles.

Windows Admin Center does more than the MMC wizards, making it more likely that you can immediately use the created virtual machine. It does miss a few common configuration points, such as VLAN assignment and startup/shutdown settings.

The PowerShell Outcome

As for PowerShell, nothing is missed in the outcome. It can do everything. Some parts take a bit more effort. You will often need two or more cmdlets to fully create a VM as desired. Before we demonstrate them, we need to cover the difference between PowerShell and the above GUI methods.

Why Should I Use PowerShell Instead of the GUI to Create a Virtual Machine?

So, if PowerShell takes more work, why would you do it? Well, if you have to create only one or two VMs, maybe you wouldn’t. In a lot of cases, it makes the most sense:

  • One-stop location for the creation of VMs with advanced settings
  • Repeatable VM creation steps
  • More precise control over the creation
  • Access to features and settings unavailable in the GUI

For single VM creation, PowerShell saves you from some double-work and usage of multiple interfaces. You don’t have to run a wizard to create the VM and then dig through property sheets to make changes. You also don’t have to start in a wizard and then switch to PowerShell if you want to change a setting not included in the GUI.

Understanding Permissions for Hyper-V’s Cmdlets

If you run these cmdlets locally on the Hyper-V host as presented in this article, then you must belong to the local Administrators group. I have personally never used the “Hyper-V Administrators” group, ever, just on principle. A Hyper-V host should not do anything else, and I have not personally encountered a situation where it made sense to separate host administration from Hyper-V administration. I have heard from others that membership in the “Hyper-V Administrators” group does not grant the powers that they expect. Your mileage may vary.

Additional Requirements for Remote Storage

If the storage location for your VMs or virtual hard disks resides on a remote system (SMB), then you have additional concerns that require you to understand the security model of Hyper-V’s helper services. Everything that you do with the Hyper-V cmdlets (and GUI tools) accesses a central CIM-based API. These APIs do their work by a two-step process:

  • The Hyper-V host verifies that your account has permission to access the requested API
  • A service on the Hyper-V host carries out the requested action within its own security context

By default, these services run as the “Local System” account. They present themselves to other entities on the network as the Hyper-V host’s computer account, not your account. Changing the account that runs the services places you in an unsupported configuration. Just understand that they run under that account and act accordingly.

The Hyper-V host’s computer account must have at least the Modify permission on the remote NTFS/ReFS file system and at least Change permission on the SMB share.

Additional Requirements for Remote Sessions

If you run these cmdlets remotely, whether explicitly (inside a PSSession) or implicitly (using the ComputerName parameter), and you do anything that depends on SMB storage (a second hop), then you must configure delegation.

The security points of a delegated operation:

  • The account that you use to run the cmdlet must have administrator privileges on the Hyper-V host
  • The Hyper-V host must allow delegation of credentials to the target location
  • You must configure the target SMB share as indicated in the last sentence of the preceding section

These rules apply whether you allow the commands to use the host’s configured default locations or if you override.

If you need help with the delegation part, we have a script for delegation.

Shortest Possible Use of New-VM

You can run New-VM with only one required parameter:
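Name is that one required parameter, so a minimal invocation (using the “demovm” name this article works with throughout) looks like:

```powershell
New-VM -Name 'demovm'
```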

This creates a virtual machine on the local host with the following characteristics:

  • Name: “demovm”
  • Generation 1
  • 1 vCPU
  • No virtual hard disk
  • Virtual CD/DVD attached to virtual IDE controller 1, location 0
  • Synthetic adapter, not connected to a virtual switch
  • Boots from CD (only bootable device on such a system)

I do not use the cmdlet this way, ever. I personally create only Generation 2 machines now unless I have some overriding reason. You can change all other options after creation. Also, I tend to connect to the virtual switch right away, as it saves me a cmdlet later.

I showed this usage so that you understand the default behavior.

Simple VM Creation in PowerShell

We’ll start with a very basic VM, using simple but verbose cmdlets.
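A sketch of that command, with the generation, memory, name, and switch name taken from the description that follows:

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch'
```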

The above creates a new Generation 2 virtual machine in the host’s default location named “demovm” with 2 gigabytes of startup memory and connects it to the virtual switch named “vSwitch”. It uses static memory because New-VM cannot enable Dynamic Memory. It uses one vCPU because New-VM cannot override that. It does not have a VHDX. We can do that with New-VM, but I have a couple of things to note for that, and I wanted to start easy. Yes, you will have to issue more cmdlets to change the additional items, but you’re already in the right place to do that. No need to switch to another screen.

Before we move on to post-creation modifications, let’s look at uncommon creation options.

Create a Simple VM with a Specific Version in PowerShell

New-VM has one feature that the GUI cannot replicate by any means: it can create a VM with a specific configuration version. Without overriding, you can only create VMs that use the maximum supported version of the host that builds the VM. If you will need to migrate or replicate the VM to a host running an older version, then you must use New-VM and specify a version old enough to run on all necessary hosts.

To create the same diskless VM as above, but that can run on a Windows Server 2012 R2 host:
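Windows Server 2012 R2 corresponds to configuration version 5.0, so a sketch of the command would be:

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -Version 5.0
```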

You can see the possible versions and their compatibility levels with Get-VMHostSupportedVersion.

Be aware that creating a VM with a lower version may have unintended side effects. For instance, versions prior to 8 don’t support hardware thread counts, so they won’t have access to Hyper-Threading when running on a Hyper-V host using the core scheduler. You can see the official matrix of VM features by version on the related Microsoft docs page.

Note: New-VM also exposes the Experimental and Prerelease switches, but these don’t work on regular release versions of Hyper-V. These switches create VMs with versions above the host’s normally supported maximum. Perhaps they function on Insider versions, but I have not tried.

Simple VM Creation with Positional Parameters

When we write scripts, we should always type out the full names of parameters. But, if you’re working interactively, take all the shortcuts you like. Let’s make that same “demovm”, but save ourselves a few keystrokes:
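With Name, MemoryStartupBytes, and Generation supplied positionally, that shrinks to something like:

```powershell
New-VM 'demovm' 2GB 2 -SwitchName 'vSwitch'
```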

SwitchName is the only non-positional parameter that we used. You can tell from the help listing ( Get-Help New-VM):
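The relevant portion of that syntax listing looks roughly like this (abbreviated):

```powershell
New-VM [[-Name] <String>] [[-MemoryStartupBytes] <Int64>] [[-Generation] <Int16>]
    [-SwitchName <String>] ...
```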

Each parameter surrounded by double brackets ([[ParameterName]]) is positional. As long as you supply its value in the exact order that it appears, you do not need to type its name.

In only 43 characters, we have accomplished the same as all but one of the wizard’s tabs. If you want to make it even shorter, the quote marks around the VM and switch names are only necessary if they contain spaces. And, once the cmdlet completes, we can just keep typing to change anything that New-VM didn’t cover.

Create a Simple VM in a Non-Default Path

We can place the VM in a location other than the default with one simple change, but it has a behavioral side effect. First, the cmdlet:
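A sketch, using the “C:\LocalVMs” target folder that this article refers to later:

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -Path 'C:\LocalVMs'
```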

The Path parameter overrides the placement of the VM from the host defaults. It does not impact the placement of any virtual hard disks.

As for the previously mentioned side effect, compare the value of the Path parameter of a VM created using the default (on my system):
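On a host with unchanged defaults, that looks something like the following (the path shown is Hyper-V’s out-of-box default):

```powershell
PS C:\> (Get-VM -Name 'demovm').Path
C:\ProgramData\Microsoft\Windows\Hyper-V
```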

The Path parameter of a VM with the overridden path value:
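With the override, note the VM-named subfolder (a sketch of the expected output):

```powershell
PS C:\> (Get-VM -Name 'demovm').Path
C:\LocalVMs\demovm
```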

When you do not specify a Path, the cmdlet will place all of the virtual machine’s files in the relevant subfolders of the host’s default path (Virtual Machines, Snapshots, etc.). When you specify a path, it first creates a subfolder with the name of the VM, then creates all those other subfolders inside. As far as I know, all of the tools exhibit this same behavior (I did not test WAC).

Create a VM with a VHDX, Single Cmdlet

To create the VM with a virtual hard disk in one shot, you must specify both the NewVHDPath and NewVHDSizeBytes parameters. NewVHDPath operates somewhat independently of Path.

Start with the easiest usage:
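A sketch (the 60 GB size is an arbitrary example value):

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -NewVHDPath 'demovm.vhdx' -NewVHDSizeBytes 60GB
```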

The above cmdlet does very nearly all the same things as the GUI wizard, but in one line. It starts by doing the same things as the first simple cmdlet that I showed you. It also creates a VHDX of the specified name. Since this cmdlet only indicates the file name, the cmdlet creates it in the host’s default virtual hard disk storage location. To finish up, it attaches the new disk to the VM’s first disk boot location (IDE 0:0 or SCSI 0:0, depending on VM Generation).

Create a VM with a VHDX, Override the VHDX Storage Location

Don’t want the VHDX in the default location? Just change NewVHDPath so that it specifies the full path to the desired location:
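For example (the “C:\Storage” folder here is a hypothetical target):

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -NewVHDPath 'C:\Storage\demovm.vhdx' -NewVHDSizeBytes 60GB
```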

Create a VM with a VHDX, Override the Entire VM Location

Want to change the location of the entire VM, but don’t want to specify the path twice? Override placement of the VM using Path, but provide only the file name for NewVHDPath:
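A sketch combining the two parameters:

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -Path 'C:\LocalVMs' -NewVHDPath 'demovm.vhdx' -NewVHDSizeBytes 60GB
```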

The above cmdlet creates a “demovm” folder in “C:\LocalVMs”. It places the virtual machine’s configuration files in a “Virtual Machines” subfolder and places the new VHDX in a “Virtual Hard Disks” subfolder.

Just as before, you can place the VHDX into an entirely different location just by providing the full path.

Notes on VHDX Creation with New-VM

A few points:

  • You must always supply the VHDX’s complete file name. New-VM will not guess at what you want to call your virtual disk, nor will it auto-append the .vhdx extension.
  • You must always supply a .vhdx extension. New-VM will not create a VHD formatted disk.
  • All the rules about second-hops and delegation apply.
  • Paths operate from the perspective of the Hyper-V host. When running remotely, a path like “C:\LocalVMs” means the C: disk on the host, not on your remote system.
  • You cannot specify an existing file. The entire cmdlet will fail if the file already exists (meaning that, if you tell it to create a new disk and it cannot for some reason, then it will not create the VM, either).

As with the wizard, New-VM can create only one VHDX and it will always connect to the primary boot location (IDE controller 0 location 0 for Generation 1, SCSI controller 0 location 0 for Generation 2). You can issue additional PowerShell commands to create and attach more virtual disks. We’ll tackle that after we finish with the New-VM cmdlet.

Create a VM with a VHDX, Single Cmdlet, and Specify Boot Order

We have one more item to control with New-VM: the boot device. Using the above cmdlets, your newly created VM will try to boot to the network first. If you used one of the variants that create a virtual hard disk, a failed network boot will fall through to the disk.

Let’s create a VM that boots to the virtual CD/DVD drive instead:
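A sketch (diskless, since combining a new VHD with a CD boot device trips the quirk described below the parameter list):

```powershell
New-VM -Name 'demovm' -Generation 2 -MemoryStartupBytes 2GB -SwitchName 'vSwitch' -BootDevice CD
```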

You have multiple options for the BootDevice parameter:

  • CD: attach a virtual drive and set it as the primary boot device
  • Floppy: set the virtual floppy drive as the primary boot device; Generation 1 only
  • IDE: set IDE controller 0, location 0 as the primary boot device; Generation 1 only
  • LegacyNetworkAdapter: attach a legacy network adapter and set it as the primary boot device; Generation 1 only
  • NetworkAdapter: set the network adapter as the primary boot device on a Generation 2 machine, attach a legacy network adapter and set it as the primary boot device on a Generation 1 machine
  • VHD: if you created a VHDX with New-VM, then this will set that disk as the primary boot device. Works for both Generation types

The BootDevice parameter does come with a quirk: if you create a VHD and set the VM to boot from CD using New-VM, it will fail to create the VM. It tries to attach both the new VHD and the new virtual CD/DVD drive to the same location. The entire process fails. You will need to create the VHD with the VM, then attach a virtual CD/DVD drive and modify the boot order, or vice versa.

Make Quick Changes to a New VM with PowerShell

You have your new VM, but you’d like to make some quick, basic changes. Set-VM includes all the common settings as well as a few rare options.

Adjust Processor and Memory Assignments

From New-VM, the virtual machine starts off with one virtual CPU and does not use Dynamic Memory. My preferred new virtual machine, in two lines:
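A sketch of that two-line pattern (the processor count and memory values are example figures, not prescriptions):

```powershell
New-VM -Name 'demovm' -Generation 2 -SwitchName 'vSwitch'
Set-VM -Name 'demovm' -ProcessorCount 2 -DynamicMemory -MemoryStartupBytes 2GB -MemoryMinimumBytes 512MB -MemoryMaximumBytes 4GB
```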

Both New-VM and Set-VM include the MemoryStartupBytes parameter. I used it with Set-VM to make the grouping logical.

Some operating systems do not work with Dynamic Memory, some applications do not work with Dynamic Memory, and some vendors (and even some administrators) just aren’t ready for virtualization. In any of those cases, you can do something like this instead:
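A sketch of the static-memory variant:

```powershell
Set-VM -Name 'demovm' -ProcessorCount 2 -StaticMemory -MemoryStartupBytes 4GB
```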

Technically, you can leave off the StaticMemory parameter in the preceding sequence. New-VM always creates a VM with static memory. Use it when you do not know the state of the VM.

Control Automatic Start and Stop Actions

When a Hyper-V host starts or shuts down, it needs to do something with its VMs. If it belongs to a cluster, it has an easy choice for highly-available VMs: move them. For non-HA VMs, it needs some direction. By default, new VMs will stay off when the host starts and save when the host shuts down. You can override these behaviors:
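A sketch using all three parameters (the 30-second delay is an example value):

```powershell
Set-VM -Name 'demovm' -AutomaticStartAction Start -AutomaticStartDelay 30 -AutomaticStopAction ShutDown
```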

You can use these parameters with any other parameter on Set-VM, and you do not need to include all three of them. If you use the Nothing setting for AutomaticStartAction or if you do not specify a value for AutomaticStartDelay, then it uses a value of 0. AutomaticStartDelay uses a time value of seconds.

AutomaticStartAction has these options (use [Tab] to cycle through):

  • Nothing: stay off
  • Start: always start with the host, after AutomaticStartDelay seconds
  • StartIfRunning: start the VM with the host after AutomaticStartDelay seconds, but only if it was running when the host shut down

Note: I am aware of what appears to be a bug in 2019 in which the VM might not start automatically.

AutomaticStopAction has these options (use [Tab] to cycle through):

  • Save: place the VM into a saved state
  • ShutDown: via the Hyper-V integration services/components, instruct the guest OS to shut down. If the guest does not respond or complete within the timeout period, Hyper-V forces the VM off.
  • TurnOff: Hyper-V halts the virtual machine immediately (essentially like pulling the power on a physical system)

If you do not know what to do, take the safe route of Save. Hyper-V will wait for saves to complete.

Determine Checkpoint Behavior

By default, Windows 10 will take a checkpoint every time you turn on a virtual machine. That essentially gives you an Oops! button. Windows Server has that option, but leaves it off by default. Both Windows and Windows Server use the so-called “Production” checkpoint and fall back to “Standard” checkpoints. You can override all this behavior.

Applicable parameters:

  • CheckpointType: indicate which type of checkpoints to create. Use [Tab] to cycle through the possible values:
    • Disabled: the VM cannot have checkpoints. Does not impact backup’s use of checkpoints.
    • Production: uses VSS in the guest to signal VSS-aware applications to flush to disk, then takes a checkpoint of only the VM’s configuration and disks. Active threads and memory contents are not protected. If VSS in the guest does not respond, falls back to a “Standard” checkpoint.
    • ProductionOnly: same as Production, but fails the checkpoint operation instead of falling back to “Standard”
    • Standard: checkpoints the entire VM, including active threads and memory. Unlike a Production checkpoint, applications inside a VM have no way to know that a checkpoint operation took place.
  • SnapshotFileLocation: specifies the location for the configuration files of a virtual machine’s future checkpoints. Does not impact existing checkpoints. Does not affect virtual hard disk files (AVHD/X files are always created alongside the parent).
  • AutomaticCheckpointsEnabled: Controls whether or not Hyper-V makes a checkpoint at each VM start. $true to enable, $false to disable.

Example:
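A sketch with hypothetical names and paths:

```powershell
# Allow only "Production" checkpoints, keep their configuration files on a dedicated path,
# and turn off automatic checkpoints
Set-VM -Name 'demovm' -CheckpointType ProductionOnly -SnapshotFileLocation 'C:\LocalVMs\Checkpoints' -AutomaticCheckpointsEnabled $false
```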

Honestly, I dislike the names “Production” and “Standard”. I outright object to the way that Hyper-V Manager and Failover Cluster Manager use the term “application-consistent” to describe them. You can read my article about the two types to help you decide what to do.

Control the Automatic Response to Disconnected Storage

In earlier versions of Hyper-V, losing connection to storage meant disaster for the VMs. Hyper-V would wait out the host’s timeout value (sometimes), then kill the VMs. Now, it can pause the virtual machine’s CPU, memory and I/O, then wait a while for storage to reconnect.

The value of AutomaticCriticalErrorActionTimeout is expressed in minutes. By default, Hyper-V will wait 30 minutes.

Alternatively, you can set AutomaticCriticalErrorAction to None and Hyper-V will kill the VM immediately, as it did in previous versions.
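Both behaviors, with a hypothetical VM name:

```powershell
# Pause on storage loss and wait up to two hours for it to return
Set-VM -Name 'demovm' -AutomaticCriticalErrorAction Pause -AutomaticCriticalErrorActionTimeout 120

# Or: kill the VM immediately, as older versions did
Set-VM -Name 'demovm' -AutomaticCriticalErrorAction None
```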

Attach Human-Readable Notes to a Virtual Machine

You can create notes for a virtual machine right on its properties.
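For example (the note text is, of course, whatever you want):

```powershell
Set-VM -Name 'demovm' -Notes 'Test SQL instance for the accounting project. Safe to delete after project close.'
```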

Jeff Hicks gave this feature a full treatment and extended it.

Advanced VM Creation with PowerShell

To control all of the features of your new VM, you will need to use additional cmdlets. All of the cmdlets demonstrated in this section will follow a VM created with:
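A plausible base along those lines (the name, size, and path are hypothetical):

```powershell
New-VM -Name 'demovm' -MemoryStartupBytes 2GB -Generation 2 -NewVHDPath 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -NewVHDSizeBytes 60GB
```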

Starting from that base allows me to get where I want with the least typing and effort.

Prepare the VM to Use Discrete Device Assignment

Hyper-V has some advanced capabilities to pass through host hardware using Discrete Device Assignment (DDA). Set-VM has three parameters that impact DDA:

  • LowMemoryMappedIoSpace
  • HighMemoryMappedIoSpace
  • GuestControlledCacheTypes

These have little purpose outside of DDA. Didier Van Hoye wrote a great article on DDA that includes practical uses for these parameters.

Specify Processor Settings on a New VM

All of the ways to create a VM result in a single vCPU with default settings. You can make some changes in the GUI, but only PowerShell reaches everything. Use the Set-VMProcessor cmdlet.

Changing a VM’s Virtual CPU Count in Hyper-V

I always use at least 2 vCPU because it allows me to leverage SMT and Windows versions past XP/2003 just seem to respond better. I do not use more than two without a demonstrated need or when I have an under-subscribed host. We have an article that dives much deeper into virtual CPUs on Hyper-V.

Give our new VM a second vCPU:
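Assuming the hypothetical demovm from above:

```powershell
Set-VMProcessor -VMName 'demovm' -Count 2
```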

You cannot change the virtual processor count on a running, saved, or paused VM.

Note: You can also change the vCPU count with Set-VM, shown earlier.

Set Hard Boundaries on a VM’s Access to CPU Resources

To set hard boundaries on the minimum and maximum percentage of host CPU resources the virtual machine can access, use the Reserve and Maximum parameters, respectively. Each expresses a percentage of the processing power represented by the VM’s assigned vCPUs, so the effective share of the host depends on the vCPU count. Calculate actual resource reservations/limits like this:

Parameter Value / Number of Host Logical Processors * Number of Assigned Virtual CPUs = Actual Value

So, a VM with 4 vCPUs set with a Reserve value of 25 on a host with 24 logical processors will lock about 4% of the host’s total CPU resources for itself. A VM with 6 vCPUs and a Maximum of 75 on a host with 16 logical processors will use no more than about 28% of total processing power. Refer to the previously-linked article for an explanation of these settings.

To set all of these values:
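A sketch, continuing with the hypothetical demovm:

```powershell
# Reserve 25% and cap at 75% of the VM's two vCPUs
Set-VMProcessor -VMName 'demovm' -Count 2 -Reserve 25 -Maximum 75
```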

You do not need to specify any of these values. New-VM and all the GUI tools create a VM with a value of 1 for Count, a value of 0 for Reserve and a value of 100 for Maximum. If you do not specify one of these parameters for Set-VMProcessor, it leaves the value alone. So, you can set the processor Count in one iteration, then modify the Reserve at a later time without disturbing the Count, and then the Maximum at some other point without impacting either of the previous settings.

You can change these settings on a VM while it is off, on, or paused, but not while saved.

Prioritize a VM’s Access to CPU Resources

Instead of hard limits, you can prioritize a VM’s CPU access with Set-VMProcessor’s RelativeWeight parameter. As the name indicates, the setting is relative. Every VM has this setting; if every VM has the same value, then no VM has priority. VMs begin life with a default processor weight of 100. The host’s scheduler gives preference to VMs with a higher processor weight.

To set the VM’s vCPU count and relative processor weight:
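For example (weight of 150 chosen arbitrarily):

```powershell
Set-VMProcessor -VMName 'demovm' -Count 2 -RelativeWeight 150
```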

You do not need to specify both values together; I only included the Count to show you how to modify both at once on a new VM. You can also include the Reserve and Maximum settings if you like.

Enable Auto-Throttle on a VM’s CPU Access

Tinkering with limits and reservations and weights can consume a great deal of administrative effort, especially when you only want to ensure that no VM runs off with your CPU and drags the whole thing down for everyone. Your first, best control on that is the number of vCPU assigned to a VM. But, when you start to work with high densities, that approach does not solve much. So, Microsoft designed Host Resource Protection. This feature does not look at raw CPU utilization so much as it monitors certain activities. If it deems them excessive, it enacts throttling. You get this feature with a single switch:
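The switch, on the hypothetical demovm:

```powershell
Set-VMProcessor -VMName 'demovm' -EnableHostResourceProtection $true
```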

Microsoft does not fully document what this controls. You will need to test it in your environment to determine its benefits.

You can use the EnableHostResourceProtection parameter by itself or with any of the others.

Set VM Processor Compatibility

Hyper-V uses a CPU model that very nearly exposes the logical processor to VMs as-is. That means that a VM can access all of the advanced instruction sets implemented by a processor. However, Microsoft also designed VMs to move between hosts. Not all CPUs use the same instruction set. So, Microsoft implements a setting that hides all instruction sets except those shared by every supported CPU from a manufacturer. If you plan to Live Migrate a VM between hosts with different CPUs from the same manufacturer, use this cmdlet:
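Continuing with the hypothetical demovm:

```powershell
Set-VMProcessor -VMName 'demovm' -CompatibilityForMigrationEnabled $true
```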

Employ a related parameter if you need to run unsupported versions of Windows (like NT 4.0 or 2000):
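Again with a hypothetical VM name:

```powershell
Set-VMProcessor -VMName 'demovm' -CompatibilityForOlderOperatingSystemsEnabled $true
```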

This one time, I did not override Count. Older operating systems did not have the best support for multi-processing, and a lot of applications from that era perform worse with multiple processors.

You can specify $false to disable these features. You can only change them while the VM is turned off. As with the preceding demonstrations, you can use these parameters in any combination with the others, or by themselves.

Change a VM’s NUMA Processor Settings

I have not written much about NUMA. Even the poorest NUMA configuration would not hurt more than a few Hyper-V administrators. If you don’t know what NUMA is, don’t worry about it. I am writing these instructions for people that know what NUMA is, need it, and just want to know how to use PowerShell to configure it for a Hyper-V VM.

Set-VMProcessor provides two of the three available NUMA settings. We will revisit the other one in the Set-VMMemory section below. Use Set-VMProcessor to specify the maximum number of virtual CPUs per NUMA node or the maximum number of virtual NUMA nodes this VM sees per socket.
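A sketch with an entirely hypothetical topology:

```powershell
# No more than 4 vCPUs per virtual NUMA node, no more than 1 virtual NUMA node per socket
Set-VMProcessor -VMName 'demovm' -MaximumCountPerNumaNode 4 -MaximumCountPerNumaSocket 1
```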

As before, you can use any combination of these parameters with each other and the previously-shown parameters. Unlike before, mistakes here can make things worse without making anything better.

Enable Hyper-V Nested Virtualization

Want to run Hyper-V on Hyper-V? No problem (anymore). Run this after you make your new VM:
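With the hypothetical demovm:

```powershell
Set-VMProcessor -VMName 'demovm' -Count 2 -ExposeVirtualizationExtensions $true
```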

Note: Enabling virtualization extensions silently disables Dynamic Memory. Only Startup memory will apply.

I have not tested this setting with other hypervisors. It does pass the enabled virtualization features of your CPU down to the guest, so it might enable others. I also did not test this parameter with any parameter other than Count.

Change Memory Settings on a New VM

New-VM always leaves you with static memory. If you don’t provide a MemoryStartupBytes value, it will use a default of one gigabyte. The GUI wizards can enable Dynamic Memory, but will only set the Startup value. For all other memory settings, you must access the VM’s property sheets or turn to PowerShell. We will make these changes with Set-VMMemory.

Note: You can also change several memory values with Set-VM, shown earlier.

Setting Memory Quantities on a VM

A virtual machine’s memory quantities appear on three parameters:

  • Startup: How much memory the virtual machine will have at boot time. If the VM does not utilize Dynamic Memory, this value persists throughout the VM’s runtime
  • MinimumBytes: The minimum amount of memory that Dynamic Memory can assign to the virtual machine
  • MaximumBytes: The maximum amount of memory that Dynamic Memory can assign to the virtual machine

These values exist on all VMs. Hyper-V only references the latter two if you configure the VM to use Dynamic Memory.
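For example:

```powershell
Set-VMMemory -VMName 'demovm' -StartupBytes 2GB
```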

This cmdlet sets the VM to use two gigabytes of memory at the start. It does not impact Dynamic Memory in any way; it leaves all of those settings alone. You can change this value at any time, although some guest operating systems will not reflect the change.

We will include the other two settings in the upcoming Dynamic Memory sections.

Enable Dynamic Memory on a VM

Control whether or not a VM uses Dynamic Memory with the DynamicMemoryEnabled parameter.
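On the hypothetical demovm:

```powershell
Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $true
```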

You can disable it with $false. The above usage does not modify any of the memory quantities. A new VM defaults to 512MB minimum and 1TB maximum.

You can only make this change while the VM is off.

You can also control the Buffer percentage that Dynamic Memory uses for this VM. The “buffer” refers to a behind-the-scenes memory reservation for memory expansion. Hyper-V sets aside a percentage of the amount of memory currently assigned to the VM for possible expansion. You control that percentage with this parameter.
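For instance, a ten percent buffer:

```powershell
Set-VMMemory -VMName 'demovm' -Buffer 10
```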

So, if Hyper-V assigns 2108 megabytes to this VM, it will also have up to 210.8 megabytes of buffered memory. Buffer only sets a maximum; Hyper-V will use less in demanding conditions or if the set size would exceed the maximum assigned value. Hyper-V ignores the Buffer setting when you disable Dynamic Memory on a VM. You can change the buffer size on a running VM.

Dynamic Memory Setting Demonstrations

Let’s combine the above settings into a few demonstrations.
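The quantities here are hypothetical; adjust to suit:

```powershell
# Dynamic Memory with a 1 GB floor, 2 GB at boot, and an 8 GB ceiling
Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $true -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 8GB

# Back to static memory at a flat 4 GB
Set-VMMemory -VMName 'demovm' -DynamicMemoryEnabled $false -StartupBytes 4GB
```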

Control a VM’s Memory Allocation Priority

If VMs have more total assigned memory than the Hyper-V host can accommodate, it will boot them by Priority order (higher first). Also, if Dynamic Memory has to choose between VMs, it will work from Priority.
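For example, to raise a VM above the default:

```powershell
Set-VMMemory -VMName 'demovm' -Priority 80
```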

Valid values range from 0 to 100. New VMs default to 50. You can use Priority with any other valid combination of Set-VMMemory. You can change Priority at any time.

Note: The GUI tools call this property Memory weight and show its value as a 9-point slider from Low (0) to High (100).

Change a VM’s NUMA Memory Settings

We covered the processor-related NUMA settings above. Use Set-VMMemory to control the amount of memory in the virtual NUMA nodes presented to this VM:
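A sketch with a hypothetical cap:

```powershell
# Limit each virtual NUMA node to 16 GB
Set-VMMemory -VMName 'demovm' -MaximumAmountPerNumaNodeBytes 16GB
```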

As with the processor NUMA settings, I only included this to show you how. If you do not understand NUMA and know exactly why you would make this change, do not touch it.

Attach Virtual Disks and CD/DVD Drives to a Virtual Machine

You could use these cmdlets instead of the features of New-VM to attach drives. You can also use them to augment New-VM. Due to some complexities, I prefer the latter.

A Note on Virtual Machine Drive Controllers

On a physical computer, you have to use the physical drive controllers as you find them. If you run out of disk locations, you have to add physical controllers. With Hyper-V, you do not directly manage the controllers. Simply instruct the related cmdlets to attach the drive to a specific controller number and location. As long as the VM does not already have a drive in that location, it will use it.

On a Generation 1 virtual machine, you have two emulated Enhanced Integrated Drive Electronics (EIDE, or just IDE) controllers, numbered 0 and 1. Each has location 0 and location 1 available. That allows a total of four available IDE slots. When you set a Generation 1 VM to boot to IDE or VHD, it will always start with IDE controller 0, position 0. If you set it to boot to CD, it will walk down through 0:0, 0:1, 1:0, and 1:1 to find the first CD drive.

Both Generation types allow up to four synthetic SCSI controllers, numbered 0 through 3. Each controller can have up to 64 locations, numbered 0 through 63.

Unlike a physical system, you will not gain benefits from balancing drives across controllers.

Create a Virtual Hard Disk File to Attach

You can’t attach a disk file that you don’t have. You must know what you will call it, where you want to put it, and how big to make it.
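For example (path and size hypothetical):

```powershell
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB
```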

By default, New-VHD creates a dynamically-expanding hard disk. For the handful of cases where fixed makes more sense, override with the Fixed parameter:
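With the same hypothetical path:

```powershell
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -Fixed
```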

By default a dynamically-expanding VHDX uses a 32 megabyte block size. For some file systems, like ext4, that can cause major expansion percentages over very tiny amounts of utilized space. Override the block size to a value as low as 1 megabyte:
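For instance:

```powershell
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -BlockSizeBytes 1MB
```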

You can also use LogicalSectorSizeBytes and PhysicalSectorSizeBytes to override defaults. Hyper-V will detect the underlying physical storage characteristics and choose accordingly, so do not override these values unless you intend to migrate the disk to a system with different values:
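A sketch forcing 4K sectors (again, only do this with a migration target in mind):

```powershell
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -SizeBytes 60GB -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096
```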

Create a Virtual Hard Disk from a Physical Disk

You can instruct Hyper-V to encapsulate the entirety of a physical disk inside a VHDX. First, use Get-Disk to find the disk number. Then use New-VHD to transfer its contents into a VHD:
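A sketch; the disk number and path are hypothetical:

```powershell
# Identify the source disk first
Get-Disk

# Capture disk number 2 into a new VHDX
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\captured.vhdx' -SourceDisk 2
```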

You can combine this usage with Fixed or BlockSizeBytes (not both). The new VHDX will have a maximum size that matches the source disk.

Create a Child Virtual Hard Disk

In some cases, you might wish to use a differencing disk with a VM, perhaps as its primary disk. This usage allows the VM to operate normally, but prevent it from making changes to the base VHDX file.
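Specifying ParentPath creates a differencing disk (paths hypothetical):

```powershell
New-VHD -Path 'C:\LocalVMs\Virtual Hard Disks\demovm-child.vhdx' -ParentPath 'C:\LocalVMs\Virtual Hard Disks\base.vhdx'
```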

You can also specify the Differencing parameter, if you like.

Note: Any change to the base virtual hard disk invalidates all of its children.

Check a VM for Available Virtual Hard Disk and CD/DVD Locations

You do not need to decide in advance where to connect a disk. However, you sometimes want to have precise control. Before using any of the attach cmdlets, consider verifying that it has not already filled the intended location. Get-VMHardDiskDrive and Get-VMDvdDrive will show current attachments.
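For example:

```powershell
Get-VMHardDiskDrive -VMName 'demovm'
Get-VMDvdDrive -VMName 'demovm'
```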

Attach a Virtual Hard Disk File to a Virtual Machine

You can add a disk very easily with Add-VMHardDiskDrive:
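With hypothetical names:

```powershell
Add-VMHardDiskDrive -VMName 'demovm' -Path 'C:\LocalVMs\Virtual Hard Disks\demovm-data.vhdx'
```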

Hyper-V will attach it to the next available location.

You can override to a particular location:
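For instance, SCSI controller 0, location 12:

```powershell
Add-VMHardDiskDrive -VMName 'demovm' -Path 'C:\LocalVMs\Virtual Hard Disks\demovm-data.vhdx' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 12
```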

Technically, you can skip the ControllerType parameter; Generation 1 assumes IDE and Generation 2 has no other option.

If you want to attach a disk to another SCSI controller, but it does not have another, then add it first:
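A sketch, again with hypothetical names:

```powershell
Add-VMScsiController -VMName 'demovm'
Add-VMHardDiskDrive -VMName 'demovm' -ControllerType SCSI -ControllerNumber 1 -Path 'C:\LocalVMs\Virtual Hard Disks\demovm-data2.vhdx'
```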

Notice that I did not specify a location on Add-VMHardDiskDrive. If you specify a controller but no location, it just uses the next available.

Attach a Virtual DVD Drive to a Virtual Machine

Take special note: this cmdlet applies to a virtual drive, not a virtual disk. Basically, it creates a place to put a CD/DVD image, but does not necessarily involve a disk image. It can do both, as you’ll see.

Add-VMDvdDrive uses all the same parameters as Add-VMHardDiskDrive above. If you do not specify the Path parameter, then the drive remains empty. If you do specify Path, it mounts the image immediately:
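Either of (the ISO path is hypothetical):

```powershell
# An empty drive
Add-VMDvdDrive -VMName 'demovm'

# A drive with an image mounted immediately
Add-VMDvdDrive -VMName 'demovm' -Path 'C:\ISOs\install-media.iso'
```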

All the notes from the beginning about permissions and delegation apply here.

If you have a DVD drive already and just want to change its contents, use Set-VMDvdDrive:
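For example:

```powershell
Set-VMDvdDrive -VMName 'demovm' -Path 'C:\ISOs\install-media.iso'
```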

If you have more than one CD/DVD attached, you can use the ControllerType, ControllerNumber, and ControllerLocation parameters to specify which drive.

If you want to empty the drive:
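Pass a null path:

```powershell
Set-VMDvdDrive -VMName 'demovm' -Path $null
```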

Remove-VMDvdDrive completely removes the drive from the system.

Work with a New Virtual Machine’s Network Adapters

Every usage of New-VM should result in a virtual machine with at least one virtual network adapter. By default, it will not attach it to a virtual switch. You might need to modify a VLAN. If desired, you can change the name of the adapter. You can also add more adapters, if you want.

Attach the Virtual Adapter to a Virtual Switch

You can connect every adapter on a VM to the same switch:
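For example (the switch name is hypothetical):

```powershell
Connect-VMNetworkAdapter -VMName 'demovm' -SwitchName 'vSwitch'
```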

If you want to specify the adapter, you have to work harder. I wrote up a more thorough guide on networking that includes that, and other advanced topics.

Connect the Virtual Adapter to a VLAN

All of the default vNIC creation processes leave the adapter as untagged. To specify a VLAN, use Set-VMNetworkAdapterVLAN:
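A sketch placing the VM in access mode on a hypothetical VLAN:

```powershell
Set-VMNetworkAdapterVlan -VMName 'demovm' -Access -VlanId 42
```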

If you need help selecting a vNIC for this operation, use my complete guide for details. It does not have a great deal of information on other ways to use this cmdlet, such as for trunks, so refer to the official documentation.

Rename the Virtual Adapter

You could differentiate adapters for the previous cmdlets by giving adapters unique names. Otherwise, Hyper-V calls them all “Network Adapter”.
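For example:

```powershell
Rename-VMNetworkAdapter -VMName 'demovm' -NewName 'Adapter 1'
```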

Like the preceding cmdlets, this usage renames all vNICs on the VM, so use caution. But, if you do this on a system with only one adapter and then add another, you can filter for adapters not named “Adapter 1”, or later use the VMNetworkAdapterName parameter.

Add Another Virtual Adapter

You can use Add-VMNetworkAdapter to add further adapters to the VM:
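The simplest form:

```powershell
Add-VMNetworkAdapter -VMName 'demovm'
```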

Even better, you can name it right away:
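For instance:

```powershell
Add-VMNetworkAdapter -VMName 'demovm' -Name 'Adapter 2'
```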

Don’t forget to connect your new adapter to a virtual switch (you can still use Connect-VMNetworkAdapter, of course):
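Continuing the hypothetical names from above:

```powershell
Connect-VMNetworkAdapter -VMName 'demovm' -Name 'Adapter 2' -SwitchName 'vSwitch'
```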

Add-VMNetworkAdapter has several additional parameters for adapter creation. Set-VMNetworkAdapter has a superset, so I will show them in its context. However, you might find it convenient to use StaticMacAddress when creating the adapter.

Set the MAC Address of a Virtual Adapter

You can set the MAC address to whatever you like:
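A sketch; the address shown is a made-up value in Microsoft’s 00-15-5D range:

```powershell
Set-VMNetworkAdapter -VMName 'demovm' -StaticMacAddress '00155D010203'
```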

If you need to override the MAC for spoofing (as in, for a software load-balancer):
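Enable spoofing like so:

```powershell
Set-VMNetworkAdapter -VMName 'demovm' -MacAddressSpoofing On
```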

Other Virtual Network Adapter Settings

Virtual network adapters have a dizzying array of options. Check the official documentation or Get-Help Set-VMNetworkAdapter to learn about them.

Work with a New Virtual Machine’s Integration Services Settings

None of the VM creation techniques allow you to make changes to the Hyper-V integration services. Few VMs ever need such a change, so including them would amount to a nuisance for most of us. We do sometimes need to change these items, perhaps to disable time synchronization for virtualized domain controllers or to block attempts to signal VSS in Linux guests.

We do not use “Set” cmdlets to control integration services. We have Get-, Enable-, and Disable- for integration services. Every new VM enables all services except “Guest Services”. Ideally, the cmdlets would all have pre-set options for the integration services. Unfortunately, we have to either type them out or pipe them in from Get-VMIntegrationService. You can use it to get a list of the available services. You can then use the selection capabilities of the console to copy and paste the item that you need (draw over the letters to copy, then right-click to paste). You can also use a filter (Where-Object) to pick the one that you need. For now, we will see the simplest choices.

To disable the time synchronization service for a virtual machine:
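With a hypothetical VM name:

```powershell
Disable-VMIntegrationService -VMName 'demovm' -Name 'Time Synchronization'
```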

To enable guest services for a virtual machine:
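Same pattern, different service:

```powershell
Enable-VMIntegrationService -VMName 'demovm' -Name 'Guest Service Interface'
```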

Most of the integration service names contain spaces. Don’t forget to use single or double quotes around such a name.

Put it All Together: Use PowerShell to Make the Perfect VM

A very common concern: “How can I remember all of this?” Well, you can’t remember it all. I have used PowerShell to control VMs since the unofficial module for 2008. I don’t remember everything. But, you don’t need to try. In the general sense, you always have Get-Help and Get-Command -Module Hyper-V. But, even better, you probably won’t use the full range of capability. Most of us create VMs with a narrow range of variance. I will give you two general tips for making the custom VM process easier.

Use a Text Tool to Save Creation Components

In introductory, training, and tutorial materials, we often make a strong distinction between interactive PowerShell and scripted PowerShell. If you remember what you want, you can type it right in. If you make enough VMs to justify it, you can have a more thorough script that you guide by parameter. But, you can combine the two for a nice middle ground.

First, pick a tool that you like. Visual Studio Code has a lot of features to support PowerShell. Notepad++ provides a fast and convenient scratch location to copy and paste script pieces.

This tip has one central piece: as you come up with cmdlet configurations that you will, or even might, use again, save them. You don’t have to build everything into a full script. Sometimes, you need a toolbox with a handful of single-purpose snippets. More than once in my career, I’ve come up with a clever solution to solve a problem at hand. Later, I tried to recall it from memory, and couldn’t. Save those little things — you never know when you’ll need them.

Use PowerShell’s Pipeline and Variables with Your Components

In all the cmdlets that I showed you above, I spelled out the virtual machine’s name. You could do a lot of text replacement each time you wanted to use them. But, you have a better way. If you’ve run New-VM lately, you probably noticed that it emitted a VirtualMachine object to the screen.

Instead of just letting all that go to the screen, you can capture it or pass it along to another cmdlet.

Pipeline Demo

Use the pipe character | to quickly ship output from one cmdlet to another. It works best for making relatively few, simple changes.
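One workable shape, sticking with the hypothetical demovm (Set-VM and Set-VMMemory both accept the virtual machine object from the pipeline, and Set-VM’s Passthru re-emits it):

```powershell
New-VM -Name 'demovm' -MemoryStartupBytes 2GB -Generation 2 -NewVHDPath 'C:\LocalVMs\Virtual Hard Disks\demovm.vhdx' -NewVHDSizeBytes 60GB |
    Set-VM -ProcessorCount 2 -AutomaticStopAction ShutDown -Passthru |
    Set-VMMemory -DynamicMemoryEnabled $true -MinimumBytes 1GB -MaximumBytes 8GB
```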

The above has three separate commands that all chain from the first. You can copy this into your text manipulation tool and save it. You can then use it as a base pattern. You change the name of the VM and its VHDX in the text tool and then you can create a VM with these settings anytime you like. No need to step through a wizard and then flip a lot of switches after.

Warning: Some people in the PowerShell community develop what I consider an unhealthy obsession with pipelining, or “one liners”. You should know how the pipeline works, especially the movement of objects. But, extensive pipelining becomes impractical quite quickly. Somewhere along the way, it amounts to little more than showing off. Worse, because not every cmdlet outputs the same object, you quickly have to learn a lot of tricks that do nothing except keep the pipeline going. Most egregiously, “one liners” impose severe challenges to legibility and maintainability with no balancing benefit. Use the pipeline to the extent that it makes things easier, but no further.

Variables Demos

You can capture the output of any cmdlet into a variable, then use that output in succeeding lines. It requires more typing than the pipeline, but gains flexibility in exchange.
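A sketch following that pattern (names and paths hypothetical):

```powershell
$vmname = 'demovm'
$vm = New-VM -Name $vmname -MemoryStartupBytes 2GB -Generation 2 -NewVHDPath "C:\LocalVMs\Virtual Hard Disks\$vmname.vhdx" -NewVHDSizeBytes 60GB
$vm | Set-VMProcessor -Count 2
$vm | Set-VMMemory -DynamicMemoryEnabled $true -MinimumBytes 1GB -MaximumBytes 8GB
$vm | Set-VM -AutomaticStopAction ShutDown
Add-VMDvdDrive -VMName $vmname -Path 'C:\ISOs\install-media.iso'
```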

Each of the cmdlets in the above listing has a PassThru parameter, but, except for New-VM, none emits an object that any of the others can use. This script takes much more typing than the pipeline demo, but it does more and breaks each activity out into a single, easily comprehensible line. As with the pipeline version, you can set up each line to follow the pattern that you use most, then change only the name in the first line to suit each new VM. Notice that it automatically gives the VHDX a name that matches the VM, something that we couldn’t do in the pipeline version.

Combining Pipelines and Variables

You can use variables and pipelines together to maximize their capabilities.
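For instance (everything past the first line stays constant from VM to VM):

```powershell
$vmname = 'demovm'
New-VM -Name $vmname -MemoryStartupBytes 2GB -Generation 2 -NewVHDPath "C:\LocalVMs\Virtual Hard Disks\$vmname.vhdx" -NewVHDSizeBytes 60GB |
    Set-VM -ProcessorCount 2 -AutomaticStopAction ShutDown -Passthru |
    Set-VMMemory -DynamicMemoryEnabled $true -MinimumBytes 1GB -MaximumBytes 8GB
```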

With this one, you can implement your unique pattern but place all the changeable items right in the beginning. This sample only sets the VM’s name. If you want to make other pieces easily changeable, just break them out onto separate lines.

Making Your Own Processes

If you will make a single configuration of VM repeatedly, you should create a saved script or an advanced function in your profile. It should have at least one parameter to specify the individual VM name.

But, even though most people won’t create VMs with a particularly wide variance of settings, neither will many people create VMs with an overly tight build. Using a script with lots of parameters presents its own challenges. So, instead of a straight-through script, make a collection of copy/pasteable components.

Use something like the following:
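A hypothetical grab-bag of components; the names, paths, switch, and VLAN are all placeholders:

```powershell
### VM creation components -- copy out the chunks you need; not meant to run top-to-bottom ###
$vmname = 'demovm'

# Base VM
$vm = New-VM -Name $vmname -MemoryStartupBytes 2GB -Generation 2 -NewVHDPath "C:\LocalVMs\Virtual Hard Disks\$vmname.vhdx" -NewVHDSizeBytes 60GB

# Processor
$vm | Set-VMProcessor -Count 2

# Dynamic Memory
$vm | Set-VMMemory -DynamicMemoryEnabled $true -MinimumBytes 1GB -MaximumBytes 8GB

# Networking
Connect-VMNetworkAdapter -VMName $vmname -SwitchName 'vSwitch'
Set-VMNetworkAdapterVlan -VMName $vmname -Access -VlanId 42

# Install media
Add-VMDvdDrive -VMName $vmname -Path 'C:\ISOs\install-media.iso'
```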

Trying to run all of that as-is would cause some problems. Instead, copy out the chunks that you need and paste them as necessary. Add in whatever parts suit your needs.

Be sure to let us know how you super-charged your VM creation routines!


Go to Original Article
Author: Eric Siron

Benefits of virtualization highlighted in top 10 stories of 2019

When an organization decides to pursue a virtual desktop solution, a host of questions awaits it.

Our most popular virtual desktop articles this year highlight that fact and show how companies are still trying to get a handle on the virtual desktop infrastructure terrain. The stories explain the benefits of virtualization and provide comparisons between different enterprise options.

A countdown of our most-read articles, determined by page views, follows.

  10. Five burning questions about remote desktop USB redirection

Virtual desktops strive to mimic the traditional PC experience, but local USB devices can create a sticking point. Remote desktop USB redirection enables users to attach their devices to their local desktop and have them function normally. In 2016, we explored options for redirection, explained how the technology worked and touched on problem areas; scanners, for instance, are infamously problematic with redirection.

  9. Tips for VDI user profile management

Another key factor for virtualizing the local desktop experience includes managing things like a user’s browser bookmarks, desktop background and settings. That was the subject of this FAQ from 2013 and our ninth most popular story for 2019. The article outlines options for managing virtual desktop user profiles, from implementing identical profiles for everyone to ensuring that settings once saved locally carry over to the virtual workspace.

  8. VDI hardware comparison: Thin vs. thick vs. zero clients

The push toward centralizing computing services has created a market for thin and zero clients, simple and low-cost computing devices reliant on servers. In implementing VDI, IT professionals should consider the right option for their organization. Thick clients, the traditional PC, provide proven functionality, but they also sidestep some of the biggest benefits of virtualization such as lower cost, energy efficiency and increased security. Thin clients provide a mix of features, and their simplicity brings VDI’s assets, such as centralized management and ease of local deployment, to bear. Zero clients require even less configuration, as they have nothing stored locally, but they tend to be proprietary.

  7. How to troubleshoot remote and virtual desktop connection issues

Connection issues can disrupt employee workflow, so avoiding and resolving them is paramount for desktop administrators. Once the local hardware has been ruled out, there are a set of common issues — exceeded capacity, firewalls, SSL certificates and network-level authentication — that IT professionals can consider when solving the puzzle.

  6. Comparing converged vs. hyper-converged infrastructure

What’s the difference between converged infrastructure (CI) and hyper-converged infrastructure (HCI)? This 2015 missive took on that question in our sixth most popular story for 2019. In short, while CI houses four data center functions — computing, storage, networking and server virtualization — into a single chassis, HCI looks to add even more features through software. HCI’s flexibility and scalability were touted as advantages over the more hardware-focused CI.

  5. Differences between desktop and server virtualization

To help those seeking VDI deployment, this informational piece from 2014 focused on how desktop virtualization differs from server virtualization. Server virtualization partitions one server into many, enabling organizations to accomplish tasks like maintaining databases, sharing files and delivering media. Desktop virtualization, on the other hand, delivers a virtual computer environment to a user. While server virtualization is easier to predict, given its uniform daily functions, a virtual desktop user might call for any number of potential applications or tasks, making the distinction between the two key.

  4. Application virtualization comparison: XenApp vs. ThinApp vs. App-V

This 2013 comparison pitted Citrix, VMware and Microsoft’s virtualization services against each other to determine the best solution for streaming applications. Citrix’s XenApp drew plaudits for the breadth of the applications it supported, but its update schedule provided only a short window to migrate to newer versions. VMware ThinApp’s portability was an asset, as it did not need installed software or device drivers, but some administrators said the service was difficult to deploy and the lack of a centralized management platform made handling applications trickier. Microsoft’s App-V provided access to popular apps like Office, but its agent-based approach limited portability when compared to ThinApp.

  3. VDI shops mull XenDesktop vs. Horizon as competition continues

In summer 2018, we took a snapshot of the desktop virtualization market as power players Citrix and VMware vied for a greater share of users. At the time, Citrix’s product, XenDesktop, was used in 57.7% of on-premises VDI deployments, while VMware’s Horizon accounted for 26.9% of the market. Customers praised VMware’s forward-facing emphasis on cloud, while a focus on security drew others to Citrix. Industry watchers wondered if Citrix would maintain its dominance through XenDesktop 7.0’s end of life that year and if challenger VMware’s vision for the future would pay off.

  2. Compare the top vendors of thin client systems

Vendors vary in the types of thin client devices they offer and the scale they can accommodate. We compared offerings from Advantech, Asus, Centerm Information, Google, Dell, Fujitsu, HP, Igel Technology, LG Electronics, Lenovo, NComputing, Raspberry Pi, Samsung, Siemens and 10ZiG Technology to elucidate the differences between them, and the uses for which they might be best suited.

  1. Understanding nonpersistent vs. persistent VDI

This article from 2013 proved some questions have staying power. Our most popular story this year explained the difference between the two types of desktops that can be deployed on VDI. Persistent VDI gives each user a dedicated desktop, allowing users more flexibility to control their workspaces but requiring more storage and adding complexity. Nonpersistent VDI does not save settings once a user logs out, a boon for security and consistent updates but less than ideal for providing easy access to needed apps.

Go to Original Article
Author:

Azure Bastion brings convenience, security to VM management

Administrators who want to manage virtual machines securely but want to avoid complicated jump server setup and maintenance have a new option at their disposal.

When you run Windows Server and Linux virtual machines in Azure, you need to configure administrative access, which means communicating with these VMs across the internet using Transmission Control Protocol (TCP) port 3389 for Remote Desktop Protocol (RDP) and TCP port 22 for Secure Shell (SSH).

You want to avoid the configuration in Figure 1, which exposes your VMs to the internet with an Azure public IP address and invites trouble via port scan attacks. Microsoft publishes its public IPv4 data center ranges, so bad actors know which public IP addresses to check to find vulnerable management ports.

port scan attacks
Figure 1. This setup exposes VMs to the internet with an Azure public IP address that makes an organization vulnerable to port scan attacks.

Another remote server management option offers the illusion of security

If you have a dedicated hybrid cloud setup with a site-to-site virtual private network (VPN) or an ExpressRoute circuit, then you can interact with your Azure VMs the same way you would with your on-premises workloads. But not every business has the money and staff to configure a hybrid cloud.

Another option, shown in Figure 2, combines the Azure public load balancer with NAT to route management traffic through the load balancer on nonstandard ports.

NAT rules
Figure 2. Using NAT and Azure load balancer for internet-based administrative VM access.

For instance, you could create separate NAT rules for inbound administrative access to the web tier VMs. If the load balancer public IP is 1.2.3.4, winserv1’s private IP is 192.168.1.10, and winserv2’s private IP is 192.168.1.11, then you could create two NAT rules that look like:

  • Inbound RDP connections to 1.2.3.4 on port TCP 33389 route to TCP 3389 on 192.168.1.10
  • Inbound RDP connections to 1.2.3.4 on port TCP 43389 route to TCP 3389 on 192.168.1.11
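Using the Azure CLI, NAT rules along those lines might look roughly like the following sketch. The resource group, load balancer and frontend IP configuration names here are hypothetical placeholders, not values from the article:

```shell
# Sketch only: resource group, load balancer and frontend names are
# illustrative placeholders for an existing Azure public load balancer.
RG=web-rg
LB=web-lb

# Route 1.2.3.4:33389 to winserv1 (192.168.1.10:3389)
az network lb inbound-nat-rule create \
  --resource-group "$RG" --lb-name "$LB" \
  --name rdp-winserv1 --protocol Tcp \
  --frontend-port 33389 --backend-port 3389 \
  --frontend-ip-name LoadBalancerFrontEnd

# Route 1.2.3.4:43389 to winserv2 (192.168.1.11:3389)
az network lb inbound-nat-rule create \
  --resource-group "$RG" --lb-name "$LB" \
  --name rdp-winserv2 --protocol Tcp \
  --frontend-port 43389 --backend-port 3389 \
  --frontend-ip-name LoadBalancerFrontEnd
```

Each rule would then need to be associated with the matching VM's NIC IP configuration before traffic flows.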

The problem with this method is your security team won’t like it. This technique is security by obfuscation that relies on a NAT protocol hack.

Jump servers are safer but have other issues

A third method that is quite common in the industry is to deploy a jump server VM to your target virtual network in Azure as shown in Figure 3.

jump server configuration
Figure 3. This diagram details a conventional jump server configuration for Azure administrative access.

The jump server is nothing more than a specially created VM that is usually exposed to the internet but has its inbound and outbound traffic restricted heavily with network security groups (NSGs). You allow your admins access to the jump server; once they log in, they can jump to any other VMs in the virtual network infrastructure for any management jobs.

Of these choices, the jump server is safest, but how many businesses have the expertise to pull this off securely? The team would need intermediate- to advanced-level skill in TCP/IP internetworking, NSG traffic rules, public and private IP addresses and Remote Desktop Services (RDS) Gateway to support multiple simultaneous connections.

For organizations that don’t have these skills, Microsoft now offers Azure Bastion.

What Azure Bastion does

Azure Bastion is a managed network virtual appliance that simplifies jump server deployment in your virtual networks. You drop an Azure Bastion host into its own subnet, perform some NSG configuration, and you are done.

Organizations that use Azure Bastion get the following benefits:

  • No more public IP addresses for VMs in Azure.
  • RDP/SSH firewall traversal. Azure Bastion tunnels the RDP and SSH traffic over a standard, non-VPN Transport Layer Security/Secure Sockets Layer (TLS/SSL) connection.
  • Protection against port scan attacks on VMs.

How to set up Azure Bastion

Azure Bastion requires a virtual network in the same region. As of publication, Microsoft offers Azure Bastion in the following regions: Australia East, East U.S., Japan East, South Central U.S., West Europe and West U.S.

You also need an empty subnet named AzureBastionSubnet. Do not enable service endpoints, route tables or delegations on this special subnet. Later in this tutorial, you can define or edit an NSG on each VM-associated subnet to customize traffic flow.

Because Azure Bastion supports multiple simultaneous connections, size the AzureBastionSubnet subnet with at least a /27 IPv4 address space. One reason for this network address size is to give Azure Bastion room to scale automatically, similar to autoscaling in Azure Application Gateway.
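As an alternative to the portal walkthrough below, the subnet and Bastion host can also be provisioned from the Azure CLI, along these lines. The resource group, virtual network, address prefix and resource names are illustrative assumptions; only the subnet name AzureBastionSubnet is mandated by the service:

```shell
# Sketch only: resource group, vnet and address values are placeholders
# for an existing virtual network in a supported region.
RG=prod-rg
VNET=prod-vnet

# The subnet must be named exactly AzureBastionSubnet, at least /27
az network vnet subnet create \
  --resource-group "$RG" --vnet-name "$VNET" \
  --name AzureBastionSubnet --address-prefixes 10.0.250.0/27

# Azure Bastion requires a Standard-SKU public IP address
az network public-ip create \
  --resource-group "$RG" --name bastion-pip --sku Standard

az network bastion create \
  --resource-group "$RG" --name prod-bastion \
  --vnet-name "$VNET" --public-ip-address bastion-pip
```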

Next, browse to the Azure Bastion configuration screen and click Add to start the deployment.

Azure Bastion deployment setup
Figure 4: Deploying an Azure Bastion resource.

As you can see in Figure 4, the deployment process is straightforward if the virtual network and AzureBastionSubnet subnet are in place.

According to Microsoft, Azure Bastion will support native RDP and SSH clients in time, but for now you establish your management connection via the Connect experience in the Azure portal. Navigate to a VM's Overview blade, click Connect, and switch to the Bastion tab as shown in Figure 5.

Azure Bastion setup
Figure 5. The Azure portal includes an Azure Bastion connection workflow.

On the Bastion tab, provide an administrator username and password, and then click Connect one more time. Your administrative RDP or SSH session opens in another browser tab, shown in Figure 6.

Windows Server management
Figure 6. Manage a Windows Server VM in Azure with Azure Bastion using an Azure portal-based RDP session.

You can share clipboard data between the Azure Bastion-hosted connection and your local system. Close the browser tab to end your administrative session.

Customize Azure Bastion

To configure Azure Bastion for your organization, create or customize an existing NSG to control traffic between the Azure Bastion subnet and your VM subnets.
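As one illustrative example of such a rule, an NSG on a VM subnet could admit RDP and SSH only from the Bastion subnet's address range. The resource names and address prefix here are hypothetical:

```shell
# Sketch only: NSG name, resource group and source prefix are placeholders.
# Allows RDP (3389) and SSH (22) into the VM subnet only from the
# AzureBastionSubnet address range.
az network nsg rule create \
  --resource-group prod-rg --nsg-name web-subnet-nsg \
  --name allow-mgmt-from-bastion --priority 100 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes 10.0.250.0/27 \
  --destination-port-ranges 3389 22
```

With a rule like this in place, the management ports are unreachable from anywhere except the Bastion host.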

Microsoft provides default NSG rules to allow traffic among subnets within your virtual network. For a more efficient and powerful option, upgrade your Azure Security Center license to Standard and onboard your VMs to just-in-time (JIT) VM access, which uses dynamic NSG rules to lock down VM management ports unless an administrator explicitly requests a connection.

You can combine JIT VM access with Azure Bastion, which results in this VM connection workflow:

  • Request access to the VM.
  • Upon approval, proceed to Azure Bastion to make the connection.

Azure Bastion needs some fine-tuning

Azure Bastion has a fixed hourly cost; Microsoft also charges for outbound data transfer after 5 GB.

Azure Bastion is an excellent way to secure administrative access to Azure VMs, but there are a few deal-breakers that Microsoft needs to address:

  1. You need to deploy an Azure Bastion host for each virtual network in your environments. If you have three virtual networks, then you need three Azure Bastion hosts, which can get expensive. Microsoft says virtual network peering support is on the product roadmap. Once Microsoft implements this feature, you can deploy a single Bastion host in your hub virtual network to manage VMs in peered spoke virtual networks.
  2. There is no support for PowerShell remoting ports; Microsoft supports RDP instead, which runs counter to its own guidance to avoid managing servers through the GUI.
  3. Microsoft’s documentation does not give enough architectural details to help administrators determine the capabilities of Azure Bastion, such as whether an existing RDP session Group Policy can be combined with Azure Bastion.

Go to Original Article
Author: