Tag Archives: virtualization

Oracle and VMware forge new IaaS cloud partnership

SAN FRANCISCO — VMware’s virtualization stack will be made available on Oracle’s IaaS, in a partnership that underscores changing currents in the public cloud market and represents a sharp strategic shift for Oracle.

Under the pact, enterprises will be able to deploy certified VMware software on Oracle Cloud Infrastructure (OCI), the company’s second-generation IaaS. Oracle is now a member of the VMware Cloud Provider Program and will sell VMware’s Cloud Foundation stack for software-defined data centers, the companies said on the opening day of Oracle’s OpenWorld conference.

Oracle plans to give customers full root access to physical servers on OCI, and they can use VMware’s vCenter product to manage on-premises and OCI-based environments through a single tool.

“The VMware you’re running on-premises, you can lift and shift it to the Oracle Cloud,” executive chairman and CTO Larry Ellison said during a keynote. “You really control version management operations, upgrade time of the VMware stack, making it easy for you to migrate — if that’s what you want to do — into the cloud with virtually no change.”

The companies have also reached a mutual agreement around support, which Oracle characterized with the following statement: “[C]ustomers will have access to Oracle technical support for Oracle products running on VMware environments. … Oracle has agreed to support joint customers with active support contracts running supported versions of Oracle products in Oracle supported computing environments.”

It’s worth noting the careful language of that statement, given Oracle and VMware’s history. While Oracle has become more open to supporting its products on VMware environments, it has yet to certify any for VMware.

Moreover, many customers have found Oracle's licensing policy for deploying its products on VMware devilishly complex. In fact, a cottage industry of advisory services has emerged to help customers stay compliant when running Oracle on VMware.

Nothing has changed with regard to Oracle's existing processor license policy, said Vinay Kumar, vice president of product management for OCI. But the VMware software made available on OCI will be offered through bundled, Oracle-sold SKUs that encompass software and physical infrastructure. Initially, one SKU based on X7 bare-metal instances will be available, according to Kumar.

Oracle and VMware have been working on the partnership for the past nine months, he added. The first SKU is expected to be available within the next six months. Kumar declined to provide details on pricing.

Oracle, VMware relations warm in cloudier days

“It seems like there is a thaw between Oracle and VMware,” said Gary Chen, an analyst at IDC. The companies have a huge overlap in terms of customers who use their software in tandem, and want more deployment options, he added. “Oracle customers are stuck on Oracle,” he said. “They have to make Oracle work in the cloud.”


Meanwhile, VMware has already struck cloud-related partnerships with AWS, IBM, Microsoft and Google, leaving Oracle little choice but to follow. Oracle has also largely ceded the general-purpose IaaS market to those competitors, and has positioned OCI for more specialized tasks as well as core enterprise application workloads, which often run on VMware today.

Massive amounts of on-premises enterprise workloads run on VMware, but as companies look to port them to the cloud, they want to do it in the fastest, easiest way possible, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif.

The biggest cost of lift-and-shift deployments to the cloud involves revalidation and testing in the new environment, Mueller added.


But at this point, many enterprises have automated test scripts in place, or even feel comfortable not retesting VMware workloads, according to Mueller. “So the leap of faith involved with deploying a VMware VM on a server in the corporate data center or in a public cloud IaaS is the same,” he said.

In the near term, most customers of the new VMware-OCI service will move Oracle database workloads over, but it will be Oracle’s job to convince them OCI is a good fit for other VMware workloads, Mueller added.


DataCore adds new HCI, analytics, subscription price options

Storage virtualization pioneer DataCore Software revamped its strategy with a new hyper-converged infrastructure appliance, cloud-based predictive analytics service and subscription-based licensing option.

DataCore launched the new offerings this week as part of an expansive DataCore One software-defined storage (SDS) vision that spans primary, secondary, backup and archival storage across data center, cloud and edge sites.

For the last two decades, customers have largely relied on authorized partners and OEMs, such as Lenovo and Western Digital, to buy the hardware to run their DataCore storage software. But next Monday, they’ll find new 1U and 2U DataCore-branded HCI-Flex appliance options that bundle DataCore software and VMware vSphere or Microsoft Hyper-V virtualization technology on Dell EMC hardware. Pricing starts at $21,494 for a 1U box, with 3 TB of usable SSD capacity.

The HCI-Flex appliance reflects “the new thinking of the new DataCore,” said Gerardo Dada, who joined the company last year as chief marketing officer.

DataCore software can pool and manage internal storage, as well as external storage systems from other manufacturers. Standard features include parallel I/O to accelerate performance, automated data tiering, synchronous and asynchronous replication, and thin provisioning.

New DataCore SDS brand

In April 2018, DataCore unified and rebranded its flagship SANsymphony software-defined storage and Hyperconverged Virtual SAN software as DataCore SDS. Although the company’s website continues to feature the original product names, DataCore will gradually transition to the new name, said Augie Gonzalez, director of product marketing at DataCore, based in Fort Lauderdale, Fla.

With the product rebranding, DataCore also switched to simpler per-terabyte pricing instead of charging customers based on a-la-carte features, nodes with capacity limits and separate expansion capacity. With this week’s strategic relaunch, DataCore is adding the option of subscription-based pricing.

Just as DataCore faced competitive pressure to add predictive analytics, the company also needed to provide a subscription option, because many other vendors offer it, said Randy Kerns, a senior strategist at Evaluator Group, based in Boulder, Colo. Kerns said consumption-based pricing has become a requirement for storage vendors competing against the public cloud.

“And it’s good for customers. It certainly is a rescue, if you will, for an IT operation where capital is difficult to come by,” Kerns said, noting that capital expense approvals are becoming a bigger issue at many organizations. He added that human nature also comes into play. “If it’s easier for them to get the approvals with an operational expense than having to go through a large justification process, they’ll go with the path of least resistance,” he said.

DataCore software-defined storage dashboard

DataCore Insight Services

DataCore SDS subscribers will gain access to the new Microsoft Azure-hosted DataCore Insight Services. DIS uses telemetry-based data the vendor has collected from thousands of SANsymphony installations to detect problems, determine best-practice recommendations and plan capacity. The vendor claimed it has more than 10,000 customers.

Like many storage vendors, DataCore will use machine learning and artificial intelligence to analyze the data and help customers to proactively correct issues before they happen. Subscribers will be able to access the information through a cloud-based user interface that is paired with a local web-based DataCore SDS management console to provide resolution steps, according to Steven Hunt, a director of product management at the company.

New DataCore HCI-Flex appliance model on Dell hardware

DataCore customers with perpetual licenses will not have access to DIS. But, for a limited time, the vendor plans to offer a program for them to activate new subscription licenses. Gonzalez said DataCore would apply the annual maintenance and support fees on their perpetual licenses to the corresponding DataCore SDS subscription, so there would be no additional cost. He said the program will run at least through the end of 2019.

Shifting to subscription-based pricing to gain access to DIS could cost a customer more money than perpetual licenses in the long run.

“But this is a service that is cloud-hosted, so it’s difficult from a business perspective to offer it to someone who has a perpetual license,” Dada said.

Johnathan Kendrick, director of business development at DataCore channel partner Universal Systems, said his customers who were briefed on DIS have asked what they need to do to access the services. He said he expects even current customers will want to move to a subscription model to get DIS.

“If you’re an enterprise organization and your data is important, going down for any amount of time will cost your company a lot of money. To be able to see [potential issues] before they happen and have a chance to fix that is a big deal,” he said.

Customers have the option of three DataCore SDS editions: enterprise (EN) for the highest performance and richest feature set, standard (ST) for midrange deployments, and large-scale (LS) for secondary “cheap and deep” storage, Gonzalez said.

Price comparison

Pricing is $416 per terabyte for a one-year subscription of the ST option, with support and software updates. The cost for a perpetual ST license is $833 per terabyte, inclusive of one year of support and software updates. The subsequent annual support and maintenance fees are 20%, or $166 per year, Gonzalez said. He added that loyalty discounts are available.

The new PSP 9 DataCore SDS update that will become generally available in mid-July includes new features, such as AES 256-bit data-at-rest encryption that can be used across pools of storage arrays, support for VMware’s Virtual Volumes 2.0 technology and UI improvements.

DataCore plans another 2019 product update that will include enhanced file access and object storage options, Gonzalez said.

This week’s DataCore One strategic launch comes 15 months after Dave Zabrowski replaced founder George Teixeira as CEO. Teixeira remains with DataCore as chairman.

“They’re serious about pushing toward the future, with the new CEO, new brand, new pricing model and this push to fulfill more of the software-defined stack down the road, adding more long-term archive type storage,” Jeff Kato, a senior analyst at Taneja Group in West Dennis, Mass., said of DataCore. “They could have just hunkered down and stayed where they were at and rested on their installed base. But the fact that they’ve modernized and gone for the future vision means that they want to take a shot at it.

“This was necessary for them,” Kato said. “All the major vendors now have their own software-defined storage stacks, and they have a lot of competition.”


VMware takes NSX security to AWS workloads

VMware has introduced features that improve the use of its NSX network virtualization and security software in private and public clouds.

At VMworld 2018 in Las Vegas, VMware unveiled an NSX instance for AWS Direct Connect and technology to apply NSX security policies on Amazon Web Services workloads. Also, VMware said Arista Networks’ virtual and physical switches would enforce NSX policies — the result of a collaboration between the two vendors.

VMware is applying NSX security policies, including microsegmentation, to AWS workloads by adding support for NSX-T to VMware Cloud on AWS. NSX-T provides networking and security management for containers and non-VMware virtualized environments. VMware Cloud on AWS is a hybrid cloud service that runs the VMware software-defined data center stack on AWS.

The latest AWS feature is in NSX-T Data Center 2.3, which VMware introduced at VMworld. Other features added to the newest version of NSX-T include support for containers and Linux-based workloads running on bare-metal servers. NSX-T uses Open vSwitch to turn a Linux host into an NSX-T transport node and to provide stateful security services.

VMware plans to release NSX-T 2.3 by November.

NSX on AWS Direct Connect

To help companies connect to AWS, VMware introduced integration between NSX and AWS Direct Connect. The combination will provide NSX-powered connectivity between workloads running on VMware Cloud on AWS and those running on a VMware-based private cloud in the data center.

AWS Direct Connect lets companies bypass the public internet and establish a dedicated network connection between a data center and an AWS location. Direct Connect is particularly useful for companies with rules against transferring sensitive data across the public internet.

Finally, VMware introduced interoperability between Arista’s CloudVision and NSX. As a result, companies can have NSX security policies enforced on Arista switches running either virtually in a public cloud or the data center.

Arista CloudVision manages switching fabrics within multiple cloud environments. Last year, the company released a virtualized version of its EOS network operating system for AWS, Google Cloud Platform, Microsoft Azure and Oracle Cloud.

VMware is using its NSX portfolio to connect and secure infrastructure and applications running in the data center, branch office and public cloud. For the branch office, VMware has integrated NSX with the company’s VeloCloud software-defined WAN to provide microsegmentation for applications at the WAN’s edge.

VMware competes in multi-cloud networking with Cisco and Juniper Networks.

Hyper-V Quick Tip: How to Enable Nested Virtualization

Q: How Do I Enable Nested Virtualization for Hyper-V Virtual Machines?

A: Pass $true for Set-VMProcessor’s “ExposeVirtualizationExtensions” parameter

In its absolute simplest form:
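    # A minimal sketch: "svtest" is a placeholder for your virtual machine's name
    Set-VMProcessor svtest -ExposeVirtualizationExtensions $true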

Set-VMProcessor has several other parameters which you can view in its online help.

As shown above, the first parameter is positional, meaning that it guesses that I supplied a virtual machine’s name because it’s in the first slot and I didn’t tell it otherwise. For interactive work, that’s fine. In scripting, try to always fully-qualify every parameter so that you and other maintainers don’t need to guess:
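    # The same operation, fully qualified; "svtest" is still a placeholder name
    Set-VMProcessor -VMName svtest -ExposeVirtualizationExtensions $true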

The Set-VMProcessor cmdlet also accepts pipeline input. Therefore, you can do things like:
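    # One sketch among many: enable the extensions on every VM matching a
    # purely illustrative name pattern
    Get-VM -Name sv* | Set-VMProcessor -ExposeVirtualizationExtensions $true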

Requirements for Nested Virtualization

In order for nested virtualization to work, you must meet all of the following:

  • The Hyper-V host must run at least the Anniversary Update version of Windows 10, Windows Server 2016, Hyper-V Server 2016, or Windows Server Semi-Annual Channel
  • The Hyper-V host must be using Intel CPUs. AMD is not yet supported
  • A virtual machine must be off to have its processor extensions changed

No configuration changes are necessary for the host.

Microsoft only guarantees that you can run Hyper-V nested within Hyper-V. Other hypervisors may work, but you will not receive support either way. You may have mixed results trying to run different versions of Hyper-V. I am unaware of any support statement on this, but I’ve had problems running mismatched levels of major versions.

Memory Changes for Nested Virtual Machines

Be aware that a virtual machine with virtualization extensions exposed will always use its configured value for Startup memory. You cannot use Dynamic Memory, nor can you change the virtual machine’s fixed memory while it is running.

Remember, as always, I’m here to help, so send me any questions you have on this topic using the question form below and I’ll get back to you as soon as I can.

More Hyper-V Quick Tips from Eric:

Hyper-V Quick Tip: How to Choose a Live Migration Performance Solution

Hyper-V Quick Tip: How Many Cluster Networks Should I Use?

Array bolsters throughput, security in NFV appliance

Array Networks Inc. has introduced an upgrade of its network functions virtualization hardware. New features in the AVX NFV appliance, which provides application delivery, security and other networking operations, include support for 40 GbE interfaces and higher throughput for encrypted traffic.

Array, based in Milpitas, Calif., launched the AVX5800, AVX7800 and AVX9800 appliances this week. Along with support for optional 40 GbE network interface cards (NICs), the latest hardware provides a significant improvement in elliptic curve cryptography (ECC) processing over a Secure Sockets Layer virtual private network (SSL VPN).

The new NFV appliances include Array’s latest software release, AVX 2.7. The upgrade provides better fine-tuning of system resources for virtualized network functions running on the platform. Other improvements include the ability to back up and restore AVX configurations and images via USB and an online image repository for software running on AVX appliances.

Array has also added enhancements for companies using the NFV appliance with OpenStack environments. The company has introduced a hypervisor driver that lets the AVX platform serve as an OpenStack compute node.

The AVX NFV platform, launched in May 2017, comprises a series of virtualized servers for running Array and third-party applications, such as Fortinet’s FortiGate next-generation firewall and Positive Technologies’ PT AF web application firewall.

A10 Harmony Controller Update

A10 has launched an upgrade to its Harmony Controller, an application delivery controller, or ADC, that is also a cloud management, orchestration and analytics engine.

A10, based in San Jose, Calif., released Harmony version 4.1 last week, adding improvements to the product’s ability to configure and manage policies across A10’s line of Thunder security appliances.

New features in Harmony include preloaded Thunder ADC services. Also added to the controller is a self-service app for Thunder SSL inspection, which decrypts traffic so security devices can analyze it.

Array Networks' AVX9800 NFV appliance

Other improvements include extending Harmony’s analytics history to 12 months, so network operators and security pros can go further back in time when investigating events.

Harmony is a cloud-optimized ADC that can spin up specific services anywhere in a hybrid cloud environment. The software also incorporates per-application analytics and centrally manages and orchestrates application services.

Aviatrix improves its AWS security

Aviatrix has added better control over traffic leaving Amazon Web Services to its AVX network security software. The enhancements provide customers with stronger protection against internal threats and external attacks.

The new AVX capability announced last week focuses on filtering egress data from an AWS virtual private cloud (VPC). An AWS VPC provides a private cloud computing environment on the infrastructure-as-a-service provider’s platform. The benefit of a VPC is the granular control a company can get over a virtual network service serving sensitive workloads.

AVX for AWS VPCs verifies the traffic destination’s IP address, hostname or website, the vendor, based in Palo Alto, Calif., said. An inline, software-controlled AVX Gateway does the VPC filtering and prevents traffic from going to unauthorized locations.

The Aviatrix platform, which comprises a controller and gateway, operates over a network overlay that spans cloud and data center environments. The new VPC egress security feature ships as part of the platform, which is delivered only as software.

Companies can deploy the Aviatrix product through the AWS marketplace. Aviatrix also has versions of its technology for Microsoft Azure and Google Cloud.

VDI shops mull XenDesktop vs. Horizon as competition continues

As the desktop virtualization market evolves, the long-standing competition between Citrix and VMware leaves IT pros with a difficult decision when choosing a product.

Citrix XenDesktop and VMware Horizon each hold their own appeal to different customers. Some are drawn to the security features and graphics-related innovations of XenDesktop, while others see Horizon as more cutting edge.

“[Citrix] was the standard for a long time,” said Zeus Kerravala, founder and principal analyst at ZK Research. “Over the last few years, VMware really has put their foot on the pedal as far as innovation goes.”

A closer look at XenDesktop vs. Horizon

XenDesktop — now transitioning to the name Citrix Virtual Desktop — accounts for 57.7% of on-premises VDI deployments compared to Horizon’s 26.9%, according to Login VSI and Frame’s “State of the EUC 2018” report released in May.

VMware is doing what it can to close the gap, however. In 2015, the company introduced Instant Clone, a feature that allows IT to clone a VM while it’s running. VMware also added Blast Extreme, a proprietary remote display protocol, in 2016.

VMware has also improved its standing in the XenDesktop vs. Horizon debate over the last couple of years by integrating with F5 Networks to improve network performance. XenDesktop uses Citrix's proprietary product, NetScaler, which is not on par with F5's offering, Kerravala said.

The University of Arkansas chose VMware Horizon over XenDesktop about a year and a half ago because of Horizon’s emphasis on the cloud.

“The Citrix solution felt like it was an evolution of on-prem,” said Jon Kelley, associate director of enterprise innovation at the university. “The VMware technology was more for the cloud-type stuff with disposable infrastructure. We wanted to be a little more forward-thinking and be software-defined.”

Horizon also appeals to customers because of its integration with other VMware products, according to Sheldon D’Paiva, director of product marketing at VMware.

“Workspace One has all the best-of-breed pieces integrated with it,” D’Paiva said. “It brings together VDI with Horizon and identity and access management so you can have single sign-on for all your apps. VMware can provide everything from the storage layer, the hypervisor, the broker … We can provide the entire stack.”

XenDesktop’s innovation has been strong when it comes to security improvements around consolidation and encryption, said Jeff Kater, director of information technology at the Kansas Development Finance Authority, a corporate finance entity that uses XenDesktop. Citrix integrated XenDesktop on top of an open API stack, which enables more secure browsing, among other benefits, Kater said.

“[Citrix’s] API stack on [Bitdefender Hypervisor Introspection] allows me to have file protection baked into the image, but also have memory introspection actually living one layer beneath the hypervisor, and so, virtually, we’re impenetrable,” he said. “I deny all rogue access. I was only able to get that with XenDesktop.”

Security is one of several aspects of Citrix’s virtual app and desktop offering that draw customers, according to Thomas Berger, senior product marketing manager at Citrix.

“It offers the best user experience for any application and user data over any network and on any device,” Berger said. “Its context-aware security is stronger and more flexible. And its specialized built-in support and management tools make management simpler and more efficient and agile.”

Citrix is also moving ahead when it comes to delivering graphics to virtual desktops. The company allows GPU-accelerated VMs to float dynamically among hosts so IT pros don’t have to shut down a VM when it moves to a new host. VMware offers a similar capability in vSphere 6.7 where IT pros can suspend desktop sessions to migrate GPU-accelerated VMs from one server to another.

Ultimately, though, it’s the customer-first approach that attracted Kater to XenDesktop, he said.

“They want to make sure you’re a happy customer,” Kater said. “If the customer has a request, Citrix works overtime trying to fix that and resolve that in future releases.”

Customer support is a strong element of VMware’s offering, as well, Kelley said.

“[VMware was] invested in making sure the vision they sold us was actually what we were going to get,” he said. “They focus on how to make it easier for people to get to the data and the stuff they need to get the work done.”

Is change in the air?

The XenDesktop vs. Horizon battle could shift thanks to a confluence of factors, experts said. For starters, last summer, Citrix went through yet another CEO change — its third in five years.

“We do look for strength in the company as a point for how we make some of these decisions,” Kelley said. “[VMware] actually had a fully fleshed out strategy for where the thing was going.”


In addition, XenDesktop 7.0 reached its end of life on June 30, 2018. When a product reaches the end of its mainstream support, it’s a natural opportunity for customers to consider other options, Kerravala said.

“That opens the door for customers to go,” he said. “When you combine the uncertainty with the company [Citrix] with the end of the support for products, fiscally responsible [companies] are going to take a look around and see what else is there.”

The changes don’t mean that existing customers are running scared, however.

“Everything has fit the bill wonderfully for us,” Kater said. “Citrix is in a good spot. And as long as it continues to innovate, people will take note, and that will smooth things over.”

Still, Kater keeps an eye on how a major change at Citrix would affect his users.

“If something goes end of life, if they sell off a part of their company, I want to make sure that every product they put into production has the ability — with a Citrix tool — to export my images, my products, into a kernel to any other of the major players and [that] they do that,” Kater said.

The Complete Guide to Azure Virtual Machines: Part 1

Azure Virtual Machines make virtualization, an already hugely flexible technology, even more adaptable through remote hosting.

Virtual machines are a part of Azure’s Infrastructure as a Service (IaaS) offering that allows you to have the flexibility of virtualization without having to invest in the underlying infrastructure. In simpler words, you are paying Microsoft to run a Virtual Machine of your choosing in their Azure environment while they provide you access to the VM.

One of the biggest misconceptions I see in the workplace is that managing cloud infrastructure is the same as, or very similar to, managing on-premises infrastructure. THIS IS NOT TRUE. Cloud infrastructure is a whole new ball game. It can be a great tool in our back pockets for certain scenarios, but only if used correctly. This blog series will explain how you can determine if a workload is suitable for an Azure VM and how to deploy it properly.

Why Use Azure Virtual Machines Over On-Premises Equipment?

One of the biggest features of the public cloud is its scalability. If you write an application and need to scale up the resources dramatically for a few days, you can create a VM in Azure, install your application, run it there and turn it off when done. You only pay for what you use. If you haven't already invested in your own physical environment, this is a very attractive alternative. The agility this provides software developers is on a whole new level, enabling companies to create applications more efficiently and to scale when desired.

Should I Choose IaaS or PaaS?

When deploying workloads in Azure, it is important to determine whether an application or service should run using Platform as a Service (PaaS) or a Virtual Machine (IaaS). For example, let's say you are porting an application into Azure that runs on SQL. Do we want to build a Virtual Machine and install SQL, or do we want to leverage Azure's PaaS services and use one of their SQL instances? There are many factors in deciding between PaaS and IaaS, but one of the biggest is how much control you require for your application to run effectively. Do you need to make a lot of changes to the registry, and do you require many tweaks within the SQL install? If so, then the virtual machine route is a better fit.

How To Choose The Right Virtual Machine Type

In Azure, Virtual Machine resource specifications are cookie cutter. You don't get to customize details like how much CPU and memory you want. VMs come in a set of predefined sizes, and you have to make those resource templates work for your computing needs. Making sure the correct size of VM is selected is crucial in Azure, not only because of the performance implications for your applications but also because of the pricing. You don't want to be paying more for a VM that is too large for your workloads.

Make sure you do your homework to determine which size is right for your needs. Also, pay close attention to I/O requirements. Storage is almost always the most common performance killer, so do your due diligence and make sure you're getting a VM that meets your IOPS (input/output operations per second) requirements. For Windows licensing, Microsoft covers the license and the Client Access License if you're running a VM that needs CALs. For Linux VMs, licensing differs per distribution.
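A quick way to see what sizes are on offer in your region is the Az PowerShell module (the location here is just an example):

    # List every VM size available in the East US region
    Get-AzVMSize -Location 'EastUS'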

Before we create a Virtual Machine inside Azure, let's go over one of the gotchas you might run into. Since everything in Azure is "pay as you go," you or your company may get a hefty bill from Microsoft if you don't keep an eye on pricing at all times. One common mistake is assuming that shutting down a VM stops the charges. It doesn't stop the meter: you're still reserving the hardware space from Microsoft, so you'll still be billed. Also, when you delete the VM, you have to delete its managed disk separately. The VM itself is not the only cost applied when running virtual machines.
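One way to avoid that surprise is to deallocate a VM rather than merely shutting it down from inside the guest. A sketch using Azure PowerShell, with placeholder resource group and VM names:

    # Stop-AzVM deallocates the VM by default, which stops the compute meter
    # (the managed disk still accrues storage charges until deleted)
    Stop-AzVM -ResourceGroupName 'LukeLab-RG' -Name 'LukeLabVM1' -Force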

Getting Started – Creating the Virtual Network

We will now demonstrate how to configure a Virtual Machine on Azure and connect to it. First, we will create the virtual networking so that the VM has a network to talk out on. Afterward, we will create the Network Security Group, which acts as the "firewall" for the VM, and then finally we will create the VM itself. To create the Virtual Network, log into the Azure Portal and select "Create a Resource", then click on Networking > Virtual Network.


Now we can specify the settings for our Virtual Network. First, we'll give it a name; I'll call mine "LukeLabVnet1". I'll leave the address space at its default, though we could make it smaller if we chose to. Then we select our subscription; you can use multiple subscriptions for different purposes, like a Development subscription and a Production subscription. Resource groups are a way to manage and group your Azure resources for billing, monitoring, and access control purposes. We already have a resource group created for this VM and its components, so I will go ahead and select that; if we wanted, we could create a new one on the fly here. Then we choose the location, which is East US for me. Next, we'll give the subnet a name, since we can create multiple subnets on this virtual network later; I'll call this one "LukeLabSubnet". I'll leave the subnet's default address space as it is, since we are just configuring one VM and setting up access to it. Once we are done, we hit "Create".

Now, to get to our newly created Virtual Network, select "Virtual Networks" on the left-hand side of the portal and click on the one we just deployed.

We can configure all of the settings for our Virtual Network here. However, for the simplicity of the demonstration, we will leave everything as it is for now.
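If you prefer scripting to portal clicks, a rough Azure PowerShell equivalent of the steps above looks like this (the resource group name and address prefixes are assumptions for illustration):

    # Define the subnet, then create the virtual network that contains it
    $subnet = New-AzVirtualNetworkSubnetConfig -Name 'LukeLabSubnet' -AddressPrefix '10.0.1.0/24'
    New-AzVirtualNetwork -Name 'LukeLabVnet1' -ResourceGroupName 'LukeLab-RG' `
        -Location 'EastUS' -AddressPrefix '10.0.0.0/16' -Subnet $subnet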

Now that we have our virtual network in place, we will need to create our Network Security Group and then finally deploy our VM, which we will do in part 2 of this series. As you can see, there are a lot of components to learn when deploying VMs in Azure.

Comments/Feedback?

If you’re unsure about anything stated here let me know in the comments below and I’ll try to explain it better.

Have you tried Azure Virtual Machines? Let us know your verdict!

OpenShift on OpenStack aims to ease VM, container management

Virtualization admins increasingly use containers to secure applications, but managing both VMs and containers in the same infrastructure presents some challenges.

IT can spin containers up and down more quickly than VMs, and they require less overhead, so there are several practical use cases for the technology. Security can be a concern, however, because all containers share the same underlying OS. As such, mission-critical applications are still better suited to VMs.

Using both containers and VMs can be helpful, because they each have their place. Still, adding containers to a traditional virtual infrastructure adds another layer of complexity and management for admins to contend with. The free and open source OpenStack provides infrastructure as a service and VM management, and organizations can run Red Hat’s OpenShift on OpenStack — and other systems — for platform as a service and container management.

Here, Brian Gracely, director of OpenShift product strategy at Red Hat, based in Raleigh, N.C., explains how to manage VMs and containers, and he shares how OpenShift on OpenStack can help.

What are the top challenges of managing both VMs and containers in virtual environments?


Brian Gracely: The first one is really around people and existing processes. You have infrastructure teams who, over the years, have become very good at managing VMs and … replicating servers with VMs, and they’ve built a set of operational things around that. When we start having the operations team deal with containers, a couple of things are different. Not all of them are as fluent in Linux as you might expect; containers are [based on] the OS. A lot of the virtualization people, especially in the VMware world, came from a Windows background. So, they have to learn a certain amount about what to do with the OS and how to deal with Linux constructs and commands.

Container environments tend to be more closely tied to people who are doing application developments. Application developers are … making changes to the application more frequently and scaling them up and down. The concept of the environment changing more frequently is sort of new for VM admins.

What is the role of OpenStack in modern data centers where VMs and containers coexist?

Gracely: OpenStack can become either an augmentation of what admins used to do with VMware or a replacement for VMware that gives them all of the VM capabilities they want to have in terms of networking, storage and so forth. In most of those cases, they want to also have hybrid capabilities, across public and private. And they can use OpenShift on OpenStack as that abstraction layer that allows them to run containerized applications and/or VM applications in their own data center.

Then, they’ll run OpenShift in one of the public clouds — Amazon or Azure or Google — and the applications that run in the cloud will end up being containerized on OpenShift. It gives them consistency from what the operations look like, and then there’s a pretty simple way of determining which applications can also run in the public cloud, if necessary.

What OpenShift features are most important to container management?

Gracely: OpenShift is based on Kubernetes technology — the de facto standard for managing containers.

If you’re a virtualization person … it’s essentially like vCenter for containers. It centrally manages policies, it centrally manages deployments of containers, [and] it makes sure that you use your compute resources really efficiently. If a container dies, an application dies, it’s going to be constantly monitoring that and will restart it automatically. Kubernetes at the core of OpenShift is the thing that allows people to manage containers at scale, as opposed to managing them one by one.

What can virtualization admins do to improve their container management skills?

Gracely: Become Linux-literate, Linux-skilled. There are plenty of courses out there that allow you to get familiar with Linux. Container technology, fundamentally, is Linux technology, so that’s a fundamental thing. There are tools like Katacoda, which is an online training system; you just go in through your browser. It gives you a Kubernetes environment to play around with, and there’s also an OpenShift set of trainings and tools that are on there.


How can admins streamline management practices between other systems for VMs and OpenShift for containers?

Gracely: OpenShift runs natively on top of both VMware and OpenStack, so for customers that just want to stay focused on VMs, their world can look pretty much the way it does today. They’re going to provision however many VMs they need, and then give self-service access to the OpenShift platform and allow their developers to place containers on there as necessary. The infrastructure team can simply make sure that it’s highly available, that it’s patched, and if more capacity is necessary, add VMs.

Where we see … things get more efficient is people who don’t want to have silos anymore between the ops team and the development team. They’re either going down a DevOps path or combining them together; they want to merge processes. This is where we see them doing much more around automating environments. So, instead of just statically [building] a bunch of VMs and leaving them alone, they’re using tools like Ansible to provision not only the VMs, but the applications that go on top of those VMs and the local database.

Will VMs and containers continue to coexist, or will containers overtake VMs in the data center?

Gracely: More and more so, we’re seeing customers taking a container-first approach with new applications. But … there’s always going to be a need for good VM management, being able to deliver high performance, high I/O stand-alone applications in VMs. We very much expect to see a lot of applications stay in VMs, especially ones that people don’t expect to need any sort of hybrid cloud environment for, some large databases for I/O reasons, or [applications that], for whatever reason, people don’t want to put in containers. Then, our job is to make sure that, as containers come in, that we can make that one seamless infrastructure.

How to Resize Virtual Hard Disks in Hyper-V 2016

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.

Requirements for Hyper-V Disk Resizing

If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

If a virtual hard disk belongs to a virtual machine, the rules change a bit.

  • If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.


Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
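For example (the path and target size are placeholders for your own environment):

    # Grow a VHDX to 40GB; note the "gb" suffix on SizeBytes
    Resize-VHD -Path 'C:\LocalVMs\svtest\svtest.vhdx' -SizeBytes 40gb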

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to the VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean "byte").

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
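For instance (again with a placeholder path):

    # Compare the file's current size against the minimum its contents allow
    Get-VHD -Path 'C:\LocalVMs\svtest\svtest.vhdx' | Select-Object Size, MinimumSize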

Resize-VHD only affects the virtual hard disk's size. It does not affect the contained file system(s). That's a separate step.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, the wizard will show you the current size and give you a New Size field to fill in, along with the maximum possible size for this VHD/X's file type. If you chose Shrink (VHDX only), it will show the current size and a New Size field, along with the minimum possible size for this file, based on its contents. Either way, all values are in GB, so you can only change in GB increments (use PowerShell if that's not acceptable).
    Enter the desired size and click Next.
  8. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:

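The same extension can be scripted with the guest's Storage module cmdlets; a sketch that assumes the volume being grown is drive E:

    # Find the maximum size the partition can grow to, then extend it
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max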

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.

I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.

VHDX Shrink Operations

I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article will all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.

Windows Server hardening still weighs heavily on admins

In these heady times of software-defined technologies and container virtualization, many IT professionals continue to grapple with an issue that has persisted since the advent of the server: security.

Ever since businesses discovered the advantages of sharing resources in a client-server arrangement, there have also been intruders attempting to bypass the protections at the perimeter of the network. These attackers angle for any weak point — outdated protocols, known vulnerabilities in unpatched systems — or go the direct route and deliver a phishing email in the hopes that a user will click on a link to unleash a malicious payload onto the network.

Windows Server hardening remains top of mind for most admins. Just as there are many ways to infiltrate a system, there are multiple ways to blunt those attacks. The following compilation highlights the most-viewed tutorials on SearchWindowsServer in 2017, several of which addressed the ways IT can reduce exposure to a server-based attack.

5. Manage Linux servers with a Windows admin’s toolkit


It took a while, but Microsoft eventually realized that spurning Linux also steered away potential customers. About 40% of the workloads on the Azure platform run some variation of Linux, Microsoft is a Platinum member of the Linux Foundation, and the company released SQL Server for Linux in September.

Many Windows shops now have a sprinkling of servers that use the open source operating system, and those administrators must figure out the best way to manage and monitor those Linux workloads. The cross-platform PowerShell Core management and automation tool promises to address this need, but until the offering reaches full maturity, this tip provides several options to help address the heterogeneous nature of many environments.

4. Disable SMB v1 for further Windows Server hardening

Unpatched Windows systems are tempting targets for ransomware and the latest malware du jour, Bitcoin miners.

A layered security approach helps, but it's even better to pull out threat enablers by the roots to blunt future attacks. Long before the spate of cyberattacks in early 2017 that exploited Server Message Block (SMB) v1 and locked up thousands of Windows machines around the world, administrators had been warned to disable the outdated protocol. This tip details the techniques to search for signs of SMB v1 and how to extinguish it from the data center.
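On Windows Server 2012 and later, a quick PowerShell check and remediation looks something like this (a sketch; test before rolling it out broadly):

    # See whether SMB v1 is still enabled on the server
    Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
    # Turn it off; -Force suppresses the confirmation prompt
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force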

3. Microsoft LAPS puts a lock on local admin passwords

For the sake of convenience, many Windows shops will use the same administrator password on each machine. While this practice helps administrators with the troubleshooting or configuration process, it’s also tremendously insecure. If that credential falls into the wrong hands, an intruder can roam through the network until they obtain ultimate system access — domain administrator privileges. Microsoft introduced its Local Administrator Password Solution (LAPS) in 2015 to help Windows Server hardening efforts. This explainer details the underpinnings of LAPS and how to tune it for your organization’s needs.
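Once LAPS is in place, an authorized administrator retrieves a machine's current local password with the AdmPwd.PS module; a sketch with a hypothetical computer name:

    # Requires the LAPS PowerShell module and delegated read permissions
    Import-Module AdmPwd.PS
    Get-AdmPwdPassword -ComputerName 'PC-FINANCE-01' | Select-Object Password, ExpirationTimestamp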

2. Chocolatey sweetens software installations on servers

While not every Windows administrator is comfortable away from the familiarity of point-and-click GUI management tools, more in IT are taking cues from the world of DevOps to implement automation routines. Microsoft offers a number of tools to install applications, but a package manager helps streamline this process through automated routines that pull in the right version of the software and make upgrades less of a chore. This tip walks administrators through the features of the Chocolatey package manager, ways to automate software installations and how an enterprise with special requirements can develop a more secure deployment method.
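As a sketch, installing and later upgrading a package from an elevated PowerShell prompt looks like this (the package name is just an example):

    # Install, then upgrade, a package from the default community feed
    choco install 7zip -y
    choco upgrade 7zip -y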

1. Reduce risks through managed service accounts

Most organizations employ service accounts for enterprise-grade applications such as Exchange Server or SQL Server. These accounts provide the necessary elevated authorizations needed to run the program’s services. To avoid downtime, quite often administrators either do not set an expiration date on a service account password or will use the same password for each service account. Needless to say, this procedure makes less work for an industrious intruder to compromise a business. A managed service account automatically generates new passwords to remove the need for administrative intervention. This tip explains how to use this feature to lock down these accounts as part of IT’s overall Windows Server hardening efforts.
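As a sketch, creating a group managed service account with the ActiveDirectory module looks like the following (the account and group names are hypothetical, and the domain must already have a KDS root key):

    # Create the gMSA and allow a computer group to retrieve its password
    New-ADServiceAccount -Name 'SqlSvc01' -DNSHostName 'SqlSvc01.corp.example.com' `
        -PrincipalsAllowedToRetrieveManagedPassword 'SQL-Servers'
    # On the server that runs the service, link and verify the account
    Install-ADServiceAccount -Identity 'SqlSvc01'
    Test-ADServiceAccount -Identity 'SqlSvc01'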