VMware has introduced features that improve the use of its NSX network virtualization and security software in private and public clouds.
At VMworld 2018 in Las Vegas, VMware unveiled an NSX instance for AWS Direct Connect and technology to apply NSX security policies on Amazon Web Services workloads. Also, VMware said Arista Networks’ virtual and physical switches would enforce NSX policies — the result of a collaboration between the two vendors.
The latest AWS feature is in NSX-T Data Center 2.3, which VMware introduced at VMworld. Other features added to the newest version of NSX-T include support for containers and Linux-based workloads running on bare-metal servers. NSX-T uses Open vSwitch to turn a Linux host into an NSX-T transport node and to provide stateful security services.
VMware plans to release NSX-T 2.3 by November.
NSX on AWS Direct Connect
To help companies connect to AWS, VMware introduced integration between NSX and AWS Direct Connect. The combination will provide NSX-powered connectivity between workloads running on VMware Cloud on AWS and those running on a VMware-based private cloud in the data center.
AWS Direct Connect lets companies bypass the public internet and establish a dedicated network connection between a data center and an AWS location. Direct Connect is particularly useful for companies with rules against transferring sensitive data across the public internet.
VMware also introduced interoperability between Arista’s CloudVision and NSX. As a result, companies can have NSX security policies enforced on Arista switches running either virtually in a public cloud or physically in the data center.
Arista CloudVision manages switching fabrics within multiple cloud environments. Last year, the company released a virtualized version of its EOS network operating system for AWS, Google Cloud Platform, Microsoft Azure and Oracle Cloud.
VMware is using its NSX portfolio to connect and secure infrastructure and applications running in the data center, branch office and public cloud. For the branch office, VMware has integrated NSX with the company’s VeloCloud software-defined WAN to provide microsegmentation for applications at the WAN’s edge.
VMware competes in multi-cloud networking with Cisco and Juniper Networks.
Set-VMProcessor has several other parameters which you can view in its online help.
As shown above, the first parameter is positional, meaning that the cmdlet guesses that I supplied a virtual machine’s name because it’s in the first slot and I didn’t tell it otherwise. For interactive work, that’s fine. In scripting, fully qualify every parameter so that you and other maintainers don’t need to guess:
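As a sketch of what fully-qualified usage looks like (the VM name here is hypothetical; the cmdlet and parameter are the ones this post discusses):

```powershell
# Every parameter is named, so intent is unambiguous to future maintainers
Set-VMProcessor -VMName 'svtest' -ExposeVirtualizationExtensions $true
```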
In order for nested virtualization to work, you must meet all of the following:
The Hyper-V host must run at least the Windows 10 Anniversary Update, Windows Server 2016, Hyper-V Server 2016, or Windows Server Semi-Annual Channel
The Hyper-V host must be using Intel CPUs. AMD is not yet supported
A virtual machine must be off to have its processor extensions changed
No configuration changes are necessary for the host.
Microsoft only guarantees that you can run Hyper-V nested within Hyper-V. Other hypervisors may work, but you will not receive support either way. You may have mixed results trying to run different versions of Hyper-V. I am unaware of any support statement on this, but I’ve had problems running mismatched levels of major versions.
Memory Changes for Nested Virtual Machines
Be aware that a virtual machine with virtualization extensions exposed will always use its configured value for Startup memory. You cannot use Dynamic Memory, nor can you change the virtual machine’s fixed memory while it is running.
Remember, as always, I’m here to help, so send me any questions you have on this topic using the question form below and I’ll get back to you as soon as I can.
Array Networks Inc. has introduced an upgrade of its network functions virtualization hardware. New features in the AVX NFV appliance, which provides application delivery, security and other networking operations, include support for 40 GbE interfaces and higher throughput for encrypted traffic.
Array, based in Milpitas, Calif., launched the AVX5800, AVX7800 and AVX9800 appliances this week. Along with support for optional 40 GbE network interface cards (NICs), the latest hardware provides a significant improvement in elliptic curve cryptography (ECC) processing over a Secure Sockets Layer virtual private network (SSL VPN).
The new NFV appliances include Array’s latest software release, AVX 2.7. The upgrade provides better fine-tuning of system resources for virtualized network functions running on the platform. Other improvements include the ability to back up and restore AVX configurations and images via USB and an online image repository for software running on AVX appliances.
Array has also added enhancements for companies using the NFV appliance with OpenStack environments. The company has introduced a hypervisor driver that lets the AVX platform serve as an OpenStack compute node.
The AVX NFV platform, launched in May 2017, comprises a series of virtualized servers for running Array and third-party applications, such as Fortinet’s FortiGate next-generation firewall and Positive Technologies’ PT AF web application firewall.
A10 Harmony Controller Update
A10 has launched an upgrade to its Harmony Controller, an application delivery controller, or ADC, that is also a cloud management, orchestration and analytics engine.
A10, based in San Jose, Calif., released Harmony version 4.1 last week, adding improvements to the product’s ability to configure and manage policies across A10’s line of Thunder security appliances.
New features in Harmony include preloaded Thunder ADC services. Also added to the controller is a self-service app for Thunder SSL inspection, which decrypts traffic so security devices can analyze it.
Other improvements include extending Harmony’s analytics history to 12 months, so network operators and security pros can go further back in time when investigating events.
Harmony is a cloud-optimized ADC that can spin up specific services anywhere in a hybrid cloud environment. The software also incorporates per-application analytics and centrally manages and orchestrates application services.
Aviatrix improves its AWS security
Aviatrix has added better control over traffic leaving Amazon Web Services to its AVX network security software. The enhancements provide customers with stronger protection against internal threats and external attacks.
The new AVX capability announced last week focuses on filtering egress data from an AWS virtual private cloud (VPC). An AWS VPC provides a private cloud computing environment on the infrastructure-as-a-service provider’s platform. The benefit of a VPC is the granular control a company can get over a virtual network service serving sensitive workloads.
AVX for AWS VPCs verifies the traffic destination’s IP address, hostname or website, the vendor, based in Palo Alto, Calif., said. An inline, software-controlled AVX Gateway does the VPC filtering and prevents traffic from going to unauthorized locations.
The Aviatrix platform, which comprises a controller and gateway, operates over a network overlay that spans cloud and data center environments. The new VPC egress security feature is available as part of the platform, which is available only as software.
Companies can deploy the Aviatrix product through the AWS marketplace. Aviatrix also has versions of its technology for Microsoft Azure and Google Cloud.
As the desktop virtualization market evolves, the long-standing competition between Citrix and VMware leaves IT pros with a difficult decision when choosing a product.
Citrix XenDesktop and VMware Horizon each hold their own appeal to different customers. Some are drawn to the security features and graphics-related innovations of XenDesktop, while others see Horizon as more cutting edge.
“[Citrix] was the standard for a long time,” said Zeus Kerravala, founder and principal analyst at ZK Research. “Over the last few years, VMware really has put their foot on the pedal as far as innovation goes.”
A closer look at XenDesktop vs. Horizon
XenDesktop — now transitioning to the name Citrix Virtual Desktop — accounts for 57.7% of on-premises VDI deployments compared to Horizon’s 26.9%, according to Login VSI and Frame’s “State of the EUC 2018” report released in May.
VMware is doing what it can to close the gap, however. In 2015, the company introduced Instant Clone, a feature that allows IT to clone a VM while it’s running. VMware also added Blast Extreme, a proprietary remote display protocol, in 2016.
VMware has also improved its standing in the XenDesktop vs. Horizon debate by integrating with F5 Networks over the last couple of years to improve network performance. XenDesktop uses Citrix’s proprietary product, NetScaler, which is not on par with F5’s offering, Kerravala said.
The University of Arkansas chose VMware Horizon over XenDesktop about a year and a half ago because of Horizon’s emphasis on the cloud.
“The Citrix solution felt like it was an evolution of on-prem,” said Jon Kelley, associate director of enterprise innovation at the university. “The VMware technology was more for the cloud-type stuff with disposable infrastructure. We wanted to be a little more forward-thinking and be software-defined.”
Horizon also appeals to customers because of its integration with other VMware products, according to Sheldon D’Paiva, director of product marketing at VMware.
“Workspace One has all the best-of-breed pieces integrated with it,” D’Paiva said. “It brings together VDI with Horizon and identity and access management so you can have single sign-on for all your apps. VMware can provide everything from the storage layer, the hypervisor, the broker … We can provide the entire stack.”
XenDesktop’s innovation has been strong when it comes to security improvements around consolidation and encryption, said Jeff Kater, director of information technology at the Kansas Development Finance Authority, a corporate finance entity that uses XenDesktop. Citrix integrated XenDesktop on top of an open API stack, which enables more secure browsing, among other benefits, Kater said.
“[Citrix’s] API stack on [Bitdefender Hypervisor Introspection] allows me to have file protection baked into the image, but also have memory introspection actually living one layer beneath the hypervisor, and so, virtually, we’re impenetrable,” he said. “I deny all rogue access. I was only able to get that with XenDesktop.”
Security is one of several aspects of Citrix’s virtual app and desktop offering that draw customers, according to Thomas Berger, senior product marketing manager at Citrix.
“It offers the best user experience for any application and user data over any network and on any device,” Berger said. “Its context-aware security is stronger and more flexible. And its specialized built-in support and management tools make management simpler and more efficient and agile.”
Citrix is also moving ahead when it comes to delivering graphics to virtual desktops. The company allows GPU-accelerated VMs to float dynamically among hosts so IT pros don’t have to shut down a VM when it moves to a new host. VMware offers a similar capability in vSphere 6.7 where IT pros can suspend desktop sessions to migrate GPU-accelerated VMs from one server to another.
Ultimately, though, it’s the customer-first approach that attracted Kater to XenDesktop, he said.
“They want to make sure you’re a happy customer,” Kater said. “If the customer has a request, Citrix works overtime trying to fix that and resolve that in future releases.”
Customer support is a strong element of VMware’s offering, as well, Kelley said.
“[VMware was] invested in making sure the vision they sold us was actually what we were going to get,” he said. “They focus on how to make it easier for people to get to the data and the stuff they need to get the work done.”
Is change in the air?
The XenDesktop vs. Horizon battle could shift thanks to a confluence of factors, experts said. For starters, last summer, Citrix went through yet another CEO change — its third in five years.
“We do look for strength in the company as a point for how we make some of these decisions,” Kelley said. “[VMware] actually had a fully fleshed out strategy for where the thing was going.”
In addition, XenDesktop 7.0 reached its end of life on June 30, 2018. When a product reaches the end of its mainstream support, it’s a natural opportunity for customers to consider other options, Kerravala said.
“That opens the door for customers to go,” he said. “When you combine the uncertainty with the company [Citrix] with the end of the support for products, fiscally responsible [companies] are going to take a look around and see what else is there.”
The changes don’t mean that existing customers are running scared, however.
“Everything has fit the bill wonderfully for us,” Kater said. “Citrix is in a good spot. And as long as it continues to innovate, people will take note, and that will smooth things over.”
Still, Kater keeps an eye on how a major change at Citrix would affect his users.
“If something goes end of life, if they sell off a part of their company, I want to make sure that every product they put into production has the ability — with a Citrix tool — to export my images, my products, into a kernel to any other of the major players and [that] they do that,” Kater said.
Azure Virtual Machines make virtualization, an already hugely flexible technology, even more adaptable through remote hosting.
Virtual machines are part of Azure’s Infrastructure as a Service (IaaS) offering, which gives you the flexibility of virtualization without having to invest in the underlying infrastructure. In simpler terms, you are paying Microsoft to run a virtual machine of your choosing in its Azure environment while it provides you access to the VM.
One of the biggest misconceptions I see in the workplace is that managing cloud infrastructure is the same as, or very similar to, managing on-premises infrastructure. THIS IS NOT TRUE. Cloud infrastructure is a whole new ball game. It can be a great tool in our back pockets for certain scenarios, but only if used correctly. This blog series will explain how you can determine whether a workload is suitable for an Azure VM and how to deploy it properly.
Why Use Azure Virtual Machines Over On-Premises Equipment?
One of the biggest features of the public cloud is its scalability. If you write an application and need to scale up resources dramatically for a few days, you can create a VM in Azure, install your application, run it there, and turn it off when done. You only pay for what you use. If you haven’t already invested in your own physical environment, this is a very attractive alternative. The agility this provides software developers is on a whole new level, enabling companies to create applications more efficiently, and being able to scale on demand is huge.
Should I Choose IaaS or PaaS?
When deploying workloads in Azure, it is important to determine whether an application or service should run on Platform as a Service (PaaS) or on a virtual machine (IaaS). For example, let’s say you are porting an application into Azure that runs on SQL Server. Do we want to build a virtual machine and install SQL Server, or do we want to leverage Azure’s PaaS services and use one of its managed SQL instances? There are many factors in the PaaS-versus-IaaS decision, but one of the biggest is how much control you require for your application to run effectively. Do you need to make a lot of changes to the registry, and do you require many tweaks within the SQL Server install? If so, the virtual machine route is a better fit.
How To Choose The Right Virtual Machine Type
In Azure, virtual machine resource specifications are cookie cutter. You don’t get to customize details such as how much CPU and memory you want; VMs come in an offering of different sizes, and you have to make those resource templates work for your computing needs. Selecting the correct VM size is crucial in Azure, not only because of performance implications for your applications but also because of pricing. You don’t want to pay more for a VM that is too large for your workloads.
Make sure you do your homework to determine which size is right for your needs. Also, pay close attention to I/O requirements. Storage is almost always the most common performance killer, so do your due diligence and make sure the VM size meets your IOPS (input/output operations per second) requirements. For Windows licensing, Microsoft covers the license, as well as the Client Access License if you’re running a VM that needs CALs. For Linux VMs, licensing differs per distribution.
Before we go and create a virtual machine inside Azure, let’s go over one of the gotchas that can catch you if you’re not aware. Because everything in Azure is pay as you go, if you don’t keep an eye on pricing, you or your company may get a hefty bill from Microsoft. One of the common mistakes with VMs is that if you don’t completely remove a VM, you can still be charged for it. Simply shutting down the VM will not stop the meter from running — you’re still reserving the hardware from Microsoft, so you’ll still be billed. Also, when you delete the VM, you must separately delete its managed disk as well. The VM itself is not the only cost applied when running virtual machines.
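To actually stop the meter without deleting the VM, it must be deallocated rather than just shut down from inside the guest. A minimal sketch using the Az PowerShell module (the resource group and VM names here are hypothetical):

```powershell
# Stop-AzVM deallocates the VM by default, releasing the hardware reservation
# so compute charges stop; -Force skips the confirmation prompt.
# (A shutdown from inside the guest OS leaves the VM "stopped" but still billed.)
Stop-AzVM -ResourceGroupName 'LukeLabRG' -Name 'LukeLabVM1' -Force
```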
Getting Started – Creating the Virtual Network
We will now demonstrate how to configure a virtual machine on Azure and connect to it. First, we need to create the virtual network so that the VM has a network to talk out on. Afterward, we will create the Network Security Group, which acts as the “firewall” for the VM, and then finally we will create the VM itself. To create the virtual network, log into the Azure Portal and select “Create a Resource”. Then click Networking > Virtual Network:
Now we can specify the settings for our virtual network. First, we’ll give it a name; I’ll call mine “LukeLabVnet1”. I’ll leave the address space at its default here, but we could make it smaller if we chose to. Then we select our subscription. You can use multiple subscriptions for different purposes, such as a development subscription and a production subscription. Resource groups are a way to group and manage your Azure resources for billing, monitoring, and access control purposes. We already have a resource group created for this VM and its components, so I will go ahead and select that; if we wanted, we could create a new one on the fly here. Then we select the location, which is East US for me. Next, we’ll give the subnet a name, because we can create multiple subnets on this virtual network later; I’ll call it “LukeLabSubnet”. I’ll leave the default address space for the subnet as-is, since we are just configuring one VM and setting up access to it. Once we are done, we hit “Create”:
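For reference, the same virtual network can be sketched with the Az PowerShell module instead of the portal (the resource group name, location, and address ranges below are assumptions for illustration):

```powershell
# Define the subnet first, then create the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'LukeLabSubnet' `
    -AddressPrefix '10.0.0.0/24'

New-AzVirtualNetwork -Name 'LukeLabVnet1' -ResourceGroupName 'LukeLabRG' `
    -Location 'eastus' -AddressPrefix '10.0.0.0/16' -Subnet $subnet
```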
Now, to get to our newly created Virtual Network, on the left-hand side of the portal we select “Virtual Networks” and click on the one we just deployed:
We can configure all of our settings for our virtual network here. However, to keep the demonstration simple, we will leave everything as it is for now:
Now that we have our virtual network in place, we will need to create our Network Security Group and then finally deploy our VM, which we will do in part 2 of this series. As you can see, there are a lot of components to learn when deploying VMs in Azure.
If you’re unsure about anything stated here let me know in the comments below and I’ll try to explain it better.
Have you tried Azure Virtual Machines? Let us know your verdict!
Virtualization admins increasingly use containers to secure applications, but managing both VMs and containers in the same infrastructure presents some challenges.
IT can spin containers up and down more quickly than VMs, and containers require less overhead, so there are several practical use cases for the technology. Security can be a concern, however, because all containers share the same underlying OS. As such, mission-critical applications are still better suited to VMs.
Using both containers and VMs can be helpful, because they each have their place. Still, adding containers to a traditional virtual infrastructure adds another layer of complexity and management for admins to contend with. The free and open source OpenStack provides infrastructure as a service and VM management, and organizations can run Red Hat’s OpenShift on OpenStack — and other systems — for platform as a service and container management.
Here, Brian Gracely, director of OpenShift product strategy at Red Hat, based in Raleigh, N.C., explains how to manage VMs and containers, and he shares how OpenShift on OpenStack can help.
What are the top challenges of managing both VMs and containers in virtual environments?
Brian Gracely: The first one is really around people and existing processes. You have infrastructure teams who, over the years, have become very good at managing VMs and … replicating servers with VMs, and they’ve built a set of operational things around that. When we start having the operations team deal with containers, a couple of things are different. Not all of them are as fluent in Linux as you might expect; containers are [based on] the OS. A lot of the virtualization people, especially in the VMware world, came from a Windows background. So, they have to learn a certain amount about what to do with the OS and how to deal with Linux constructs and commands.
Container environments tend to be more closely tied to people who are doing application development. Application developers are … making changes to the application more frequently and scaling them up and down. The concept of the environment changing more frequently is sort of new for VM admins.
What is the role of OpenStack in modern data centers where VMs and containers coexist?
Gracely: OpenStack can become either an augmentation of what admins used to do with VMware or a replacement for VMware that gives them all of the VM capabilities they want to have in terms of networking, storage and so forth. In most of those cases, they want to also have hybrid capabilities, across public and private. And they can use OpenShift on OpenStack as that abstraction layer that allows them to run containerized applications and/or VM applications in their own data center.
Then, they’ll run OpenShift in one of the public clouds — Amazon or Azure or Google — and the applications that run in the cloud will end up being containerized on OpenShift. It gives them consistency from what the operations look like, and then there’s a pretty simple way of determining which applications can also run in the public cloud, if necessary.
What OpenShift features are most important to container management?
Gracely: OpenShift is based on Kubernetes technology — the de facto standard for managing containers.
If you’re a virtualization person … it’s essentially like vCenter for containers. It centrally manages policies, it centrally manages deployments of containers, [and] it makes sure that you use your compute resources really efficiently. If a container dies, an application dies, it’s going to be constantly monitoring that and will restart it automatically. Kubernetes at the core of OpenShift is the thing that allows people to manage containers at scale, as opposed to managing them one by one.
What can virtualization admins do to improve their container management skills?
Gracely: Become Linux-literate, Linux-skilled. There are plenty of courses out there that allow you to get familiar with Linux. Container technology, fundamentally, is Linux technology, so that’s a fundamental thing. There are tools like Katacoda, which is an online training system; you just go in through your browser. It gives you a Kubernetes environment to play around with, and there’s also an OpenShift set of trainings and tools that are on there.
How can admins streamline management practices between other systems for VMs and OpenShift for containers?
Gracely: OpenShift runs natively on top of both VMware and OpenStack, so for customers that just want to stay focused on VMs, their world can look pretty much the way it does today. They’re going to provision however many VMs they need, and then give self-service access to the OpenShift platform and allow their developers to place containers on there as necessary. The infrastructure team can simply make sure that it’s highly available, that it’s patched, and if more capacity is necessary, add VMs.
Where we see … things get more efficient is people who don’t want to have silos anymore between the ops team and the development team. They’re either going down a DevOps path or combining them together; they want to merge processes. This is where we see them doing much more around automating environments. So, instead of just statically [building] a bunch of VMs and leaving them alone, they’re using tools like Ansible to provision not only the VMs, but the applications that go on top of those VMs and the local database.
Will VMs and containers continue to coexist, or will containers overtake VMs in the data center?
Gracely: More and more so, we’re seeing customers taking a container-first approach with new applications. But … there’s always going to be a need for good VM management, being able to deliver high performance, high I/O stand-alone applications in VMs. We very much expect to see a lot of applications stay in VMs, especially ones that people don’t expect to need any sort of hybrid cloud environment for, some large databases for I/O reasons, or [applications that], for whatever reason, people don’t want to put in containers. Then, our job is to make sure that, as containers come in, that we can make that one seamless infrastructure.
We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016) and Client Hyper-V (Windows 10) have this capability.
Requirements for Hyper-V Disk Resizing
If we only think of virtual hard disks as files, then we won’t have many requirements to worry about. We can grow both VHD and VHDX files easily. We can shrink VHDX files fairly easily. Shrinking VHD requires more effort. This article primarily focuses on growth operations, so I’ll wrap up with a link to a shrink how-to article.
You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).
If a virtual hard disk belongs to a virtual machine, the rules change a bit.
If the virtual machine is Off, any of its disks can be resized (in accordance with the restrictions that we just mentioned)
If the virtual machine is Saved or has checkpoints, none of its disks can be resized
If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks
Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?
A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.
If the virtual disk in question is the VHD type, then no, it cannot be resized online.
If the virtual disk in question belongs to the virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
If the virtual disk in question belongs to the virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.
Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?
The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.
Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?
The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.
Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?
Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.
How to Resize a Virtual Hard Disk with PowerShell
PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.
The cmdlet to use is Resize-VHD. As of this writing, the documentation for that cmdlet says that it operates offline only. Ignore that. Resize-VHD works under the same restrictions outlined above.
Resize-VHD -Path '\\svstore01\VMs\Virtual Hard Disks\test.vhdx' -SizeBytes 30gb
The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:
Left it unconnected
Connected it to the VM’s virtual SCSI controller
Turned the connected VM off
Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate such suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and b and B both mean “byte”).
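You can see the suffix translation directly at any PowerShell prompt, since it’s a parser feature rather than anything specific to Hyper-V:

```powershell
# PowerShell expands size suffixes to byte counts using powers of 1,024
1kb    # evaluates to 1024
30gb   # evaluates to 32212254720 (30 x 1024^3)
```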
Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). That’s a separate step.
How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager
Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.
From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
In the far right Actions pane, click Edit Disk.
The first page is information. Click Next.
Browse to (or type) the location of the disk to edit.
The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.
Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.
How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager
Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.
If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is Saved, start it and shut it down (or delete the saved state); as noted above, a Saved VM’s disks cannot be resized.
Open the virtual machine’s Settings dialog.
In the left pane, choose the virtual disk to resize.
In the right pane, click the Edit button in the Media block.
The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
Choose to Expand or Shrink (VHDX only) the virtual hard disk. If the VM is off, you will see additional options. Choose the desired operation and click Next.
Whether you chose Expand or Shrink (the latter for VHDX only), the wizard shows the current size and gives you a New Size field to fill in. For Expand, it also displays the maximum possible size for this VHD/X’s file type; for Shrink, it displays the minimum possible size for this file, based on the contents. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
Enter the desired size and click Next.
The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.
The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.
Following Up After a Virtual Hard Disk Resize Operation
When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:
Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.
Of course, you could also create a new partition (or partitions) if you prefer.
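If you prefer to script the follow-up instead of using Disk Management, the same extension can be performed inside a Windows guest with the Storage cmdlets (the drive letter is an assumption):

```powershell
# Run inside the guest after growing the VHDX: extend the C: partition
# into all of the newly available space
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
```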
I have not performed this operation on any Linux guests, so I can’t tell you exactly what to do. The operation will depend on the file system and the tools that you have available. You can probably determine what to do with a quick Internet search.
VHDX Shrink Operations
I didn’t talk much about shrink operations in this article. Shrinking requires you to prepare the contained file system(s) before you can do anything in Hyper-V. You might find that you can’t shrink a particular VHDX at all. Rather than muddle this article with all of the necessary information, I’m going to point you to an earlier article that I wrote on this subject. That article was written for 2012 R2, but nothing has changed since then.
What About VHD/VHDX Compact Operations?
I often see confusion between shrinking a VHD/VHDX and compacting a VHD/VHDX. These operations are unrelated. When we talk about resizing, the proper term for reducing the size of a virtual hard disk is “shrink”. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Look for a forthcoming article on that topic.
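As a quick illustration of the difference, compacting has its own cmdlet entirely; a minimal sketch, assuming a detached, dynamically expanding VHDX at a hypothetical path:

```powershell
# Compacting, not shrinking: reclaim zeroed blocks inside a dynamically
# expanding VHDX so the file takes less physical space. The virtual
# disk size that the guest sees does not change.
Optimize-VHD -Path 'C:\LocalVMs\svtest.vhdx' -Mode Full
```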
In these heady times of software-defined technologies and container virtualization, many IT professionals continue to grapple with an issue that has persisted since the advent of the server: security.
Ever since businesses discovered the advantages of sharing resources in a client-server arrangement, there have also been intruders attempting to bypass the protections at the perimeter of the network. These attackers angle for any weak point — outdated protocols, known vulnerabilities in unpatched systems — or go the direct route and deliver a phishing email in the hopes that a user will click on a link to unleash a malicious payload onto the network.
Windows Server hardening remains top of mind for most admins. Just as there are many ways to infiltrate a system, there are multiple ways to blunt those attacks. The following compilation highlights the most-viewed tutorials on SearchWindowsServer in 2017, several of which addressed the ways IT can reduce exposure to a server-based attack.
5. Manage Linux servers with a Windows admin’s toolkit
It took a while, but Microsoft eventually realized that spurning Linux also steered away potential customers. About 40% of the workloads on the Azure platform run some variation of Linux, Microsoft is a Platinum member of the Linux Foundation, and the company released SQL Server for Linux in September.
Many Windows shops now have a sprinkling of servers that use the open source operating system, and those administrators must figure out the best way to manage and monitor those Linux workloads. The cross-platform PowerShell Core management and automation tool promises to address this need, but until the offering reaches full maturity, this tip provides several options to help address the heterogeneous nature of many environments.
4. Disable SMB v1 for further Windows Server hardening
Unpatched Windows systems are tempting targets for ransomware and the latest malware du jour, Bitcoin miners.
A layered security approach helps, but it’s even better to pull out threat enablers by the roots to blunt future attacks. Long before the spate of cyberattacks in early 2017 that hinged on an exploit in Server Message Block (SMB) v1 that locked up thousands of Windows machines around the world, administrators had been warned to disable the outdated protocol. This tip details the techniques to search for signs of SMB v1 and how to extinguish it from the data center.
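As a minimal sketch of the PowerShell approach on Windows Server 2012 and later (run from an elevated prompt):

```powershell
# Check whether the server still speaks SMB v1, then turn it off
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
```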
3. Microsoft LAPS puts a lock on local admin passwords
For the sake of convenience, many Windows shops will use the same administrator password on each machine. While this practice helps administrators with the troubleshooting or configuration process, it’s also tremendously insecure. If that credential falls into the wrong hands, an intruder can roam through the network until they obtain ultimate system access — domain administrator privileges. Microsoft introduced its Local Administrator Password Solution (LAPS) in 2015 to help Windows Server hardening efforts. This explainer details the underpinnings of LAPS and how to tune it for your organization’s needs.
2. Chocolatey sweetens software installations on servers
While not every Windows administrator is comfortable away from the familiarity of point-and-click GUI management tools, more in IT are taking cues from the world of DevOps to implement automation routines. Microsoft offers a number of tools to install applications, but a package manager helps streamline this process through automated routines that pull in the right version of the software and make upgrades less of a chore. This tip walks administrators through the features of the Chocolatey package manager, ways to automate software installations and how an enterprise with special requirements can develop a more secure deployment method.
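For a feel of the workflow, a few representative Chocolatey commands, using the git package purely as an example:

```powershell
# Find, install and later upgrade a package with Chocolatey
choco search git
choco install git -y     # -y suppresses confirmation prompts
choco upgrade git -y
```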
1. Reduce risks through managed service accounts
Most organizations employ service accounts for enterprise-grade applications such as Exchange Server or SQL Server. These accounts provide the necessary elevated authorizations needed to run the program’s services. To avoid downtime, quite often administrators either do not set an expiration date on a service account password or will use the same password for each service account. Needless to say, this procedure makes less work for an industrious intruder to compromise a business. A managed service account automatically generates new passwords to remove the need for administrative intervention. This tip explains how to use this feature to lock down these accounts as part of IT’s overall Windows Server hardening efforts.
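A minimal sketch of creating a group managed service account with the ActiveDirectory module (all names here are hypothetical, and the domain needs a KDS root key in place first):

```powershell
# Create a gMSA whose password Active Directory rotates automatically
New-ADServiceAccount -Name 'svcApp' `
    -DNSHostName 'svcApp.corp.example.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'AppServerGroup'

# On the server that will run the service:
Install-ADServiceAccount -Identity 'svcApp'
```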
SAN FRANCISCO — Juniper Networks has extended its Contrail network virtualization platform to multicloud environments, competing with Cisco and VMware for the growing number of enterprises running applications across public and private clouds.
The Juniper Contrail Enterprise Multicloud, introduced this week at the company’s NXTWORK conference, is a single software console for orchestrating, managing and monitoring network services across applications running on cloud-computing environments. The new product, which won’t be available until early next year, would compete with the cloud versions of Cisco’s ACI and VMware’s NSX.
Also at the show, Juniper announced that it would contribute the codebase for OpenContrail, the open source version of the software-defined networking (SDN) overlay, to The Linux Foundation. The company said the foundation’s networking projects would help drive OpenContrail deeper into cloud ecosystems.
Contrail Enterprise Multicloud stems, in part, from the work Juniper has done over several years with telcos building private clouds, Juniper CEO Rami Rahim told analysts and reporters at the conference.
“It’s almost like a bad secret — how embedded we have been now with practically all — many — telcos around the world in helping them develop the telco cloud,” Rahim said. “We’ve learnt the hard way in some cases how this [cloud networking] needs to be done.”
Is Juniper’s technology enough to win?
Technologically, Juniper Contrail can compete with ACI and NSX, IDC analyst Brad Casemore said. “Juniper clearly has put considerable thought into the multicloud capabilities that Contrail needs to support, and, as you’d expect from Juniper, the features and functionality are strong.”
However, Juniper will need more than good technology when competing for customers. A lot more enterprises use Cisco and VMware products in data centers than Juniper gear. Also, Cisco has partnered with Google to build strong technological ties with the Google Cloud Platform, and VMware has a similar deal with Amazon.
“Cisco and VMware have marketed their multicloud offerings aggressively,” Casemore said. “As such, Juniper will have to raise and sustain the marketing profile of Contrail Enterprise Multicloud.”
Networking with Juniper Contrail Enterprise Multicloud
Contrail Enterprise Multicloud comprises networking, security and network management. Companies can buy the three pieces separately, but the new product lets engineers manage the trio through the software console that sits on top of the centralized Contrail controller.
For networking in a private cloud, the console relies on a virtual network overlay built on top of abstracted hardware switches, which can be from Juniper or a third party. The system also includes a virtual router that provides links to the physical underlay and Layer 4-7 network services, such as load balancers and firewalls. Through the console, engineers can create and distribute policies that tailor the network services and underlying switches to the needs of applications.
Contrail Enterprise Multicloud capabilities within public clouds, including Amazon Web Services, Google Cloud Platform and Microsoft Azure, are different because the provider controls the infrastructure. Network operators use the console to program and control overlay services for workloads through the APIs made available by cloud providers. The Juniper software also uses native cloud APIs to collect analytics information.
Other Juniper Contrail Enterprise Multicloud capabilities
Network managers can use the console to configure and control the gateway leading to the public cloud and to define and distribute policies for cloud-based virtual firewalls.
Also accessible through the console is Juniper’s AppFormix management software for cloud environments. AppFormix provides policy monitoring and application and software-based infrastructure analytics. Engineers can configure the product to handle routine networking tasks.
The cloud-related work of Juniper, Cisco and VMware is a recognition that the boundaries of the enterprise data center are being redrawn. “Data center networking vendors are having to redefine their value propositions in a multicloud world,” Casemore said.
Indeed, an increasing number of companies are reducing the amount of hardware and software running in private data centers by moving workloads to public clouds. Revenue from cloud services rose almost 29% year over year in the first half of 2017 to more than $63 billion, according to IDC.
VMware has updated its version of NSX for non-vSphere environments, adding to the network virtualization software integration with the Pivotal Container Service and the latest iteration of Pivotal Cloud Foundry.
VMware introduced NSX-T 2.1 on Tuesday. Through NSX-T, the Pivotal Container Service (PKS) brings support for Kubernetes container clusters to vSphere, VMware’s virtualization platform for the data center. PCF is an open source cloud platform as a service (PaaS) that developers use to build, deploy, run and scale applications.
VMware developed the Cloud Foundry service that is the basis for PCF. Pivotal Software, whose parent company is Dell Technologies, now owns the PaaS, which Pivotal licenses under Apache 2.0.
VMware NSX-T was introduced early this year to provide networking and security management for non-vSphere application frameworks, OpenStack environments, and multiple KVM distributions.
Support for KVM underscores VMware’s recognition that the virtualization layer in Linux is a force in cloud environments. As a result, the vendor has to extend its technology beyond vSphere to reach workloads outside the data center.
Kubernetes cluster support in VMware NSX-T
VMware NSX-T integration with PKS is significant because of the extensive use of Kubernetes in public, private and hybrid cloud environments. Kubernetes, which Google developed, is used to automate the deployment, scaling, maintenance, and operation of multiple Linux-based containers across clusters of nodes. Google, VMware and Pivotal developed PKS.
VMware has said it plans to add Docker support in NSX-T. Docker is another popular open source software platform for application containers.
VMware NSX-T is a piece of the vendor’s strategy for spreading its technology across the branch, WAN, cloud computing environments, and security and networking in the data center. Essential to its networking plans is the acquisition of SD-WAN vendor VeloCloud, which VMware plans to complete by early next year.
VMware expects to use VeloCloud to take NSX into the branch and the WAN. “What VeloCloud offers is really NSX everywhere,” VMware CEO Pat Gelsinger told analysts last week, according to a transcript published by the financial site Seeking Alpha.
Gelsinger held the conference call after the company released earnings for the fiscal third quarter ended Nov. 3. VMware reported revenue of $1.98 billion, an increase of 11% over the same period last year. Net income grew to $443 million from $319 million a year ago.