Tag Archives: servers

USBAnywhere vulnerabilities put Supermicro servers at risk

Security researchers discovered a set of vulnerabilities in Supermicro servers that could allow threat actors to remotely attack systems as if they had physical access to the USB ports.

Researchers at Eclypsium, based in Beaverton, Ore., discovered flaws in the baseboard management controllers (BMCs) of Supermicro servers and dubbed the set of issues “USBAnywhere.” The researchers said authentication issues put servers at risk because “BMCs are intended to allow administrators to perform out-of-band management of a server, and as a result are highly privileged components.

“The problem stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, an ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass,” the researchers wrote in a blog post. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.”

The USBAnywhere flaws make it so the virtual USB drive acts in the same way a physical USB would, meaning an attacker could load a new operating system image, deploy malware or disable the target device. However, the researchers noted the attacks would be possible on systems where the BMCs are directly exposed to the internet or if an attacker already has access to a corporate network.

Rick Altherr, principal engineer at Eclypsium, told SearchSecurity, “BMCs are one of the most privileged components on modern servers. Compromise of a BMC practically guarantees compromise of the host system as well.”

Eclypsium said there are currently “at least 47,000 systems with their BMCs exposed to the internet and using the relevant protocol.” These systems would be at additional risk because BMCs are rarely powered off and the authentication bypass vulnerability can persist unless the system is turned off or loses power.

Altherr said he found the USBAnywhere vulnerabilities because he “was curious how virtual media was implemented across various BMC implementations,” but Eclypsium found that only Supermicro systems were affected.

According to the blog post, Eclypsium reported the USBAnywhere flaws to Supermicro on June 19 and provided additional information on July 9, but Supermicro did not acknowledge the reports until July 29.

“Supermicro engaged with Eclypsium to understand the vulnerabilities and develop fixes. Supermicro was responsive throughout and worked to coordinate availability of firmware updates to coincide with public disclosure,” Altherr said. “While there is always room for improvement, Supermicro responded in a way that produced an amicable outcome for all involved.”

Altherr added that customers should “treat BMCs as a vulnerable device. Put them on an isolated network and restrict access to only IT staff that need to interact with them.”

Supermicro noted in its security advisory that isolating BMCs from the internet would reduce the risk from USBAnywhere but not eliminate the threat entirely. Firmware updates are currently available for affected Supermicro systems, and in addition to updating, Supermicro advised users to disable virtual media by blocking TCP port 623.


Try these PowerShell networking commands to stay connected

While it would be nice if they did, servers don’t magically stay online on their own.

Servers go offline for a lot of reasons; it's your job to find a way to check network connectivity to those servers quickly and easily. You can use PowerShell networking commands, such as the Test-Connection and Test-NetConnection cmdlets, to help.

The problem with ping

For quite some time, system administrators have used ping to test network connectivity. This little utility sends an Internet Control Message Protocol (ICMP) echo request to an endpoint and listens for an ICMP reply.

[Figure: The ping utility runs a fairly simple test to check for a response from a host.]

Because ping only tests ICMP, its usefulness as a full connectivity test is limited. Another caveat: The Windows firewall blocks ICMP requests by default. If the ICMP request never reaches the server in question, you'll get a false negative, which makes the ping result misleading.

The Test-Connection cmdlet offers a deeper look

We need a better way to test server network connectivity, so let's use PowerShell instead of ping. The Test-Connection cmdlet also sends ICMP packets, but it uses Windows Management Instrumentation, which gives us more granular results. While ping returns text-based output, the Test-Connection cmdlet returns a Win32_PingStatus object that contains a lot of useful information.

The Test-Connection command has a few different parameters you can use to tailor your query to your liking, such as changing the buffer size and defining the number of seconds between the pings. The output is the same but the request is a little different.

Test-Connection www.google.com -Count 2 -BufferSize 128 -Delay 3

You can use Test-Connection to check on remote computers and ping a remote computer as well, provided you have access to those machines. The command below connects to the SRV1 and SRV2 computers and sends ICMP requests from those computers to www.google.com:

Test-Connection -Source 'SRV2', 'SRV1' -ComputerName 'www.google.com'

Source Destination IPV4Address   IPV6Address Bytes Time(ms)
------ ----------- -----------   ----------- ----- --------
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    5
SRV2   google.com  172.217.7.174             32    6
SRV2   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5
SRV1   google.com  172.217.7.174             32    5

If the output is too verbose, and you just want a simple result, use the Quiet parameter.

Test-Connection -ComputerName google.com -Quiet
True

For more advanced network checks, try the Test-NetConnection cmdlet

If simple ICMP requests aren’t enough to test network connectivity, PowerShell also provides the Test-NetConnection cmdlet. This cmdlet is the successor to Test-Connection and goes beyond ICMP to check network connectivity.

For basic use, Test-NetConnection just needs a value for the ComputerName parameter and will mimic Test-Connection's behavior.

Test-NetConnection -ComputerName www.google.com

ComputerName : www.google.com
RemoteAddress : 172.217.9.68
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 34 ms

Test-NetConnection has advanced capabilities and can test for open ports. The example below will check to see if port 80 is open:

Test-NetConnection -ComputerName www.google.com -Port 80

ComputerName : google.com
RemoteAddress : 172.217.5.238
RemotePort : 80
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
TcpTestSucceeded : True

The Boolean TcpTestSucceeded property returns True to indicate port 80 is open.
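If a script only needs a true or false answer, the cmdlet's InformationLevel parameter trims the output to just that Boolean. A quick sketch, with port 443 chosen purely for illustration:

# Quiet returns $true/$false instead of the full result object
if (Test-NetConnection -ComputerName www.google.com -Port 443 -InformationLevel Quiet) {
    Write-Output 'Port 443 is open'
}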

We can also use the TraceRoute parameter with the Test-NetConnection cmdlet to check the progress of packets to the destination address.

Test-NetConnection -ComputerName google.com -TraceRoute

ComputerName : google.com
RemoteAddress : 172.217.5.238
InterfaceAlias : Ethernet 2
SourceAddress : X.X.X.X
PingSucceeded : True
PingReplyDetails (RTT) : 44 ms
TraceRoute : 192.168.86.1
192.168.0.1
142.254.146.117
74.128.4.113
65.29.30.36
65.189.140.166
66.109.6.66
66.109.6.30
107.14.17.204
216.6.87.149
72.14.198.28
108.170.240.97
216.239.54.125
172.217.5.238

If you dig into the help for the Test-NetConnection cmdlet, you’ll find it has quite a few parameters to test many different situations.


IBM Power9 bulks up for AI workloads

The latest proprietary Power servers from IBM, armed with the long-awaited IBM Power9 processors, look for relevance among next-generation enterprise workloads, but the company will need some help from its friends to take on its biggest market challenger.

IBM emphasizes increased speed and bandwidth with its AC922 Power Systems to better take on high-performance computing tasks, such as building models for AI and machine learning training. The company said it plans to pursue mainstream commercial applications, such as building supply chains and medical diagnostics, but those broader-based opportunities may take longer to materialize.

“Most big enterprises are doing research and development on machine learning, with some even deploying such projects in niche areas,” said Patrick Moorhead, president and principal analyst at Moor Insights & Strategy. “But it will be 12 to 18 months before enterprises can even start driving serious volume in that space.”

The IBM Power9-based systems’ best chance for short-term commercial success is at the high end of the market.

“Power9 as a platform for AI is the focus over the next year or two,” said Charles King, principal analyst at Pund-IT Research Inc. “We are still a ways from seeing this sort of technology come down further into the commercial markets.”

Power9 as a platform for AI is the focus over the next year or two.
Charles Kingprincipal analyst, Pund-IT Research Inc.

But IBM may need to rely on its most important business partner and customer to drive the Power9’s commercial acceptance.

Google, a co-founder, along with IBM and Nvidia, of the OpenPower Foundation, contributed work around Power8 and ported over its applications to work with IBM’s Power-based systems. Google executives have declined to say how the company would deploy the Power9 internally and for what applications, but broadly deploying the IBM Power9 processor in servers for its data centers could seed confidence among corporate users, Moorhead said.

“To gain share at the macro level they need a Google deployment,” he said. “This could inspire others to deploy Power9 who are actually running large amounts of their production workloads.”

Under the hood of Power9

At the heart of the IBM AC922 system’s architecture are PCI-Express 4.0, Nvidia’s NVLink 2.0 and OpenCAPI, which together improve speed and bandwidth, according to the company. The NVLink 2.0, developed jointly by IBM and Nvidia, is claimed to transport data between the IBM Power9 CPU and Nvidia’s GPU seven to 10 times faster than an earlier version of the technology. The systems are also tuned to take advantage of popular AI frameworks: TensorFlow, a Google-developed open source software library for numerical computation using data flow graphs; Chainer, a framework supporting neural networks; and Caffe, a deep learning framework developed by Berkeley AI Research.

These “accelerators” are part of the IBM Power9 evolving hardware architecture, and are designed to solidify the system’s competitive footing in the cloud computing market.

“We have seen how aggressive the compute requirements have grown in the Linux space, especially as AI workloads were added to the mix,” said Stefanie Chiras, vice president of IBM’s Power Systems. “It now requires a different level of infrastructure underneath to support that level of data transport.”

IBM faces off with Intel

Some of IBM's server competitors have pledged to deliver systems built to handle AI workloads, and some analysts believe Intel will be Big Blue's most serious competitor. Intel unveiled its AI processor, called Nervana, late last year and promised a finished product by the end of this year.

Intel's advantage in the budding competition for AI processors is the overwhelming market share of its server-based Xeon processors, compared with that of proprietary chips such as IBM's Power9. Nervana could prove a formidable competitor to IBM in the AI market, but the Power9 with its accompanying accelerator technologies has the edge right now, Moorhead said.

“Intel will point out they have about 95% of the processors and their Nervana accelerator, but IBM is the only one out there with NVLink that has the highest bandwidth connection you can have between a CPU and GPU,” Moorhead said. “Intel would have to significantly change its architecture to support something like NVLink, and they won’t do that any time soon.”

Getting to know serverless computing architecture

Keith Townsend, writing in The CTO Advisor, reminded engineers that servers still exist in a serverless computing architecture. According to Townsend, servers in a serverless computing architecture are a throwback to a time when developing distributed applications depended on servers as the backbone of infrastructure.

The point of the serverless computing architecture offered through services such as Lambda, Google Cloud Functions or Azure Functions is to abstract infrastructure and support application developers writing scripts within repositories hosted by these systems. The overarching goal, however, is to reduce costs for IT teams by increasing the efficiency of running code. According to Townsend, in a server-centric setup, organizations might dedicate an entire server instance that sits waiting for an event to trigger a software function. Services such as Lambda speed up operations, cutting costs in the fast-paced world of cloud computing.

Read more of Townsend’s thoughts on serverless computing architecture.

Efficacy in endpoint security

Doug Cahill, an analyst at Enterprise Strategy Group in Milford, Mass., said that efficiency and efficacy can no longer be mutually exclusive in modern enterprise IT. Many organizations face challenges with stopping threats but have little capacity to take on extra work. Recent ESG research, focused on efficacy, indicates that organizations experience poor efficacy as a result of antivirus software that is unable to detect and block new endpoint threats. Other issues included alert fatigue, re-imaging, a lack of integration between tools and slow endpoint agents.

According to Cahill, organizations are responding with training, incremental investment, getting security teams more involved and adding layers of extra controls. “For many, as the research results scream, the requisite design center can be summed up as ‘efficient efficacy,’ the need for new endpoint security solutions from established and emerging brands to detect and prevent a range of attacks without imposing operational overheads that disrupt the business,” Cahill said.

Dig deeper into Cahill’s thoughts on endpoint security.

Use cases for bidirectional forwarding

Ivan Pepelnjak, blogging in IP Space, answered a question about Bidirectional Forwarding Detection (BFD). A network professional asked whether BFD would still be useful on a direct router-to-router physical link, with no Layer 2 transport in the middle, as a way to detect a software failure. According to Pepelnjak, the answer is, "it depends," and can be determined by figuring out what problem needs to be solved. As alternatives, an engineer could try detecting link failures at the physical layer, the data link layer or the network layer (using BFD and routing protocols).

In most cases, physical layer detection is the easiest, although it can only detect physical failures, such as a broken cable. Data link layer detection cannot spot end-to-end failures across a Layer 2 network. "BFD is perfectly positioned to solve the network path element failure detection challenge. It sits at the waist of the protocol hourglass, is standardized, and simple enough to be easy to implement," he said. Occasionally, BFD timers can trigger false positives. He added that many network elements implement BFD on line cards as well as on the central CPU, which becomes an important consideration if packet forwarding is handled by that same general-purpose CPU.

Explore more of Pepelnjak’s thoughts on BFD.

How to Perform Hyper-V Storage Migration

New servers? New SAN? Trying out hyper-convergence? Upgrading to Hyper-V 2016? Any number of conditions might prompt you to move your Hyper-V virtual machine’s storage to another location. Let’s look at the technologies that enable such moves.

An Overview of Hyper-V Migration Options

Hyper-V offers numerous migration options. Each has its own distinctive features. Unfortunately, we in the community often muck things up by using incorrect and confusing terminology. So, let’s briefly walk through the migration types that Hyper-V offers:

  • Quick migration: Cluster-based virtual machine migration that involves placing a virtual machine into a saved state, transferring ownership to another node in the same cluster, and resuming the virtual machine. A quick migration does not involve moving anything that most of us consider storage.
  • Live migration: Cluster-based virtual machine migration that involves transferring the active state of a running virtual machine to another node in the same cluster. A Live Migration does not involve moving anything that most of us consider storage.
  • Storage migration: Any technique that utilizes the Hyper-V management service to relocate any file-based component that belongs to a virtual machine. This article focuses on this migration type, so I won’t expand any of those thoughts in this list.
  • Shared Nothing Live Migration: Hyper-V migration technique between two hosts that does not involve clustering. It may or may not include a storage migration. The virtual machine might or might not be running. However, this migration type always includes ownership transfer from one host to another.

It Isn’t Called Storage Live Migration

I have always called this operation “Storage Live Migration”. I know lots of other authors call it “Storage Live Migration”. But, Microsoft does not call it “Storage Live Migration”. They just call it “Storage Migration”. The closest thing that I can find to “Storage Live Migration” in anything from Microsoft is a 2012 TechEd recording by Benjamin Armstrong. The title of that presentation includes the phrase “Live Storage Migration”, but I can’t determine if the “Live” just modifies “Storage Migration” or if Ben uses it as part of the technology name. I suppose I could listen to the entire hour and a half presentation, but I’m lazy. I’m sure that it’s a great presentation, if anyone wants to listen and report back.

Anyway, does it matter? I don’t really think so. I’m certainly not going to correct anyone that uses that phrase. However, the virtual machine does not necessarily need to be live. We use the same tools and commands to move a virtual machine’s storage whether it’s online or offline. So, “Storage Migration” will always be a correct term. “Storage Live Migration”, not so much. However, we use the term “Shared Nothing Live Migration” for virtual machines that are turned off, so we can’t claim any consistency.

What Can Be Moved with Hyper-V Storage Migration?

When we talk about virtual machine storage, most people think of the places where the guest operating system stores its data. That certainly comprises the physical bulk of virtual machine storage. However, it’s also only one bullet point on a list of multiple components that form a virtual machine.

Independently, you can move any of these virtual machine items:

  • The virtual machine’s core files (configuration in .xml or .vmcx, .bin, .vsv, etc.)
  • The virtual machine’s checkpoints (essentially the same items as the preceding bullet point, but for the checkpoint(s) instead of the active virtual machine)
  • The virtual machine’s second-level paging file location. I have not tested to see if it will move a VM with active second-level paging files, but I have no reason to believe that it wouldn’t
  • Virtual hard disks attached to a virtual machine
  • ISO images attached to a virtual machine

We most commonly move all of these things together. Hyper-V doesn’t require that, though. Also, we can move all of these things in the same operation but distribute them to different destinations.

What Can’t Be Moved with Hyper-V Storage Migration?

In terms of storage, we can move everything related to a virtual machine. But, we can’t move the VM’s active, running state with Storage Migration. Storage Migration is commonly partnered with a Live Migration in the operation that we call “Shared Nothing Live Migration”. To avoid getting bogged down in implementation details that are more academic than practical, just understand one thing: when you pick the option to move the virtual machine’s storage, you are not changing which Hyper-V host owns and runs the virtual machine.

More importantly, you can’t use any Microsoft tool-based technique to separate a differencing disk from its parent. So, if you have an AVHDX (differencing disk created by the checkpointing mechanism) and you want to move it away from its source VHDX, Storage Migration will not do it. If you instruct Storage Migration to move the AVHDX, the entire disk chain goes along for the ride.

Uses for Hyper-V Storage Migration

Out of all the migration types, storage migration has the most applications and special conditions. For instance, Storage Migration is the only Hyper-V migration type that does not always require domain membership. Granted, the one exception to the domain membership rule won’t be very satisfying for people that insist on leaving their Hyper-V hosts in insecure workgroup mode, but I’m not here to please those people. I’m here to talk about the nuances of Storage Migration.

Local Relocation

Let’s start with the simplest usage: relocation of local VM storage. Some situations in this category:

  • You left VMs in the default “C:\ProgramData\Microsoft\Windows\Hyper-V” and/or “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks” locations and you don’t like it
  • You added new internal storage as a separate volume and want to re-distribute your VMs
  • You have storage speed tiers but no active management layer
  • You don’t like the way your VMs’ files are laid out
  • You want to defragment VM storage space. It’s a waste of time, but it works.

Network Relocation

With so many ways to do network storage, it’s nearly a given that we’ll all need to move a VHDX across ours at some point. Some situations:

  • You’re migrating from local storage to network storage
  • You’re replacing a SAN or NAS and need to relocate your VMs
  • You’ve expanded your network storage and want to redistribute your VMs

Most of the reasons listed under “Local Relocation” can also apply to network relocation.

Cluster Relocation

We can’t always build our clusters perfectly from the beginning. For the most part, a cluster’s relocation needs list will look like the local and network lists above. A few others:

  • Your cluster has new Cluster Shared Volumes that you want to expand into
  • Existing Cluster Shared Volumes have a data distribution that does not balance well. Remember that data access from a CSV owner node is slightly faster than from a non-owner node

The reasons matter less than the tools when you’re talking about clusters. You can’t use the same tools and techniques to move virtual machines that are protected by Failover Clustering under Hyper-V as you use for non-clustered VMs.

Turning the VM Off Makes a Difference for Storage Migration

You can perform a very simple experiment: perform a Storage Migration for a virtual machine while it’s on, then turn it off and migrate it back. The virtual machine will move much more quickly while it’s off. This behavior can be explained in one word: synchronization.

When the virtual machine is off, a Storage Migration is essentially a monitored file copy. The ability of the constituent parts to move bits from source to destination sets the pace of the move. When the virtual machine is on, all of the rules change. The migration is subjected to these constraints:

  • The virtual machine’s operating system must remain responsive
  • Writes must be properly captured
  • Reads must occur from the most appropriate source

Even if the guest operating system does not experience much activity during the move, that condition cannot be taken as a constant. In other words, Hyper-V needs to be ready for it to start demanding lots of I/O at any time.

So, the Storage Migration of a running virtual machine will always take longer than the Storage Migration of a virtual machine in an off or saved state. You can choose the convenience of an online migration or the speed of an offline migration.

Note: You can usually change a virtual machine’s power state during a Storage Migration. It’s less likely to work if you are moving across hosts.

How to Perform Hyper-V Storage Migration with PowerShell

The nice thing about using PowerShell for Storage Migration: it works for all Storage Migration types. The bad thing about using PowerShell for Storage Migration: it can be difficult to get all of the pieces right.

The primary cmdlet to use is Move-VMStorage. If you will be performing a Shared Nothing Live Migration, you can also use Move-VM. The parts of Move-VM that pertain to storage match Move-VMStorage. Move-VM has uses, requirements, and limitations that don’t pertain to the topic of this article, so I won’t cover Move-VM here.

A Basic Storage Migration in PowerShell

Let’s start with an easy one. Use this when you just want all of a VM’s files to be in one place:
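# A minimal sketch of such a move; substitute your own VM name and target folder
Move-VMStorage -VMName 'testvm' -DestinationStoragePath 'C:\LocalVMs'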

This will move the virtual machine named testvm so that all of its components reside under the C:\LocalVMs folder. That means:

  • The configuration files will be placed in C:\LocalVMs\Virtual Machines
  • The checkpoint files will be placed in C:\LocalVMs\Snapshots
  • The VHDXs will be placed in C:\LocalVMs\Virtual Hard Disks
  • Depending on your version, an UndoLog Configuration folder will be created if it doesn’t already exist. The folder is meant to contain Hyper-V Replica files. It may be created even for virtual machines that aren’t being replicated.

Complex Storage Migrations in PowerShell

For more complicated move scenarios, you won’t use the DestinationStoragePath parameter. You’ll use one or more of the individual component parameters. Choose from the following:

  • VirtualMachinePath: Where to place the VM’s configuration files.
  • SnapshotFilePath: Where to place the VM’s checkpoint files (again, NOT the AVHDXs!)
  • SmartPagingFilePath: Where to place the VM’s smart paging files
  • Vhds: An array of hash tables that indicate where to place individual VHD/X files.

Some notes on these items:

  • You are not required to use all of these parameters. If you do not specify a parameter, then its related component is left alone. Meaning, it doesn’t get moved at all.
  • If you’re trying to use this to get away from those auto-created Virtual Machines and Snapshots folders, it doesn’t work. They’ll always be created as sub-folders of whatever you type in.
  • It doesn’t auto-create a Virtual Hard Disks folder.
  • If you were curious whether or not you needed to specify those auto-created subfolders, the answer is: no. Move-VMStorage will always create them for you (unless they already exist).
  • The VHDs hash table is the hardest part of this whole thing. I’m usually a PowerShell-first kind of guy, but even I tend to go to the GUI for Storage Migrations.

The following will move all components except VHDs, which I’ll tackle in the next section:
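# A sketch of a component-level move, reusing the testvm name and C:\LocalVMs target from above;
# the three destinations could just as easily point to different locations
Move-VMStorage -VMName 'testvm' -VirtualMachinePath 'C:\LocalVMs\testvm' -SnapshotFilePath 'C:\LocalVMs\testvm' -SmartPagingFilePath 'C:\LocalVMs\testvm'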

Move-VMStorage’s Array of Hash Tables for VHDs

The three …FilePath parameters are easy: just specify the path. The Vhds parameter is tougher. It is one or more hash tables inside an array.

First, the hash tables. A hash table is a custom object that looks like an array, but each entry has a unique name. The hash tables that Vhds expects have a SourceFilePath entry and a DestinationFilePath entry. Each must be fully-qualified for a file. A hash table is contained like this: @{ }. The name of an entry and its value are joined with an =. Entries are separated by a semicolon (;). So, if you want to move the VHDX named svtest.vhdx from \\svstore\VMs to C:\LocalVMs\testvm, you’d use this hash table:
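# A sketch of the hash table just described; note the destination includes the file name
@{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' }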

Reading that, you might ask (quite logically): “Can I change the name of the VHDX file when I move it?” The answer: No, you cannot. So, why then do you need to enter the full name of the destination file? I don’t know!

Next, the arrays. An array is bounded by @( ). Its entries are separated by commas. So, to move two VHDXs, you would do something like this:
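# A sketch of a two-disk move; svtest2.vhdx stands in for a hypothetical second disk
Move-VMStorage -VMName 'testvm' -Vhds @(
    @{
        SourceFilePath = '\\svstore\VMs\svtest.vhdx';
        DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx'
    },
    @{
        SourceFilePath = '\\svstore\VMs\svtest2.vhdx';
        DestinationFilePath = 'C:\LocalVMs\testvm\svtest2.vhdx'
    }
)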

I broke that onto multiple lines for legibility. You can enter it all on one line. Note where I used parenthesis and where I used curly braces.

Tip: To move a single VHDX file, you don’t need to do the entire array notation. You can use the first example with Vhds.
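For instance, a single-disk move can hand one hash table straight to Vhds (a sketch reusing the svtest.vhdx paths from above):

Move-VMStorage -VMName 'testvm' -Vhds @{ SourceFilePath = '\\svstore\VMs\svtest.vhdx'; DestinationFilePath = 'C:\LocalVMs\testvm\svtest.vhdx' }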

A Practical Move-VMStorage Example with Vhds

If you’re looking at all that and wondering why you’d ever use PowerShell for such a thing, I have the perfect answer: scripting. Don’t do this by hand. Use it to move lots of VMs in one fell swoop. If you want to see a plain example of the Vhds parameter in action, the Get-Help examples show one. I’ve got a more practical script in mind.

The following would move all VMs on the host. All of their config, checkpoint, and second-level paging files will be placed on a share named “\\vmstore\slowstorage”. All of their VHDXs will be placed on a share named “\\vmstore\faststorage”. We will have PowerShell deal with the source paths and file names.
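A sketch of that script, assuming the in-box Hyper-V cmdlets Get-VM, Get-VMHardDiskDrive and Move-VMStorage and the two share names above:

foreach ($VM in Get-VM)
{
    # Config, checkpoint and smart paging files go to the slow share
    $MoveParameters = @{
        VM = $VM
        VirtualMachinePath = '\\vmstore\slowstorage'
        SnapshotFilePath = '\\vmstore\slowstorage'
        SmartPagingFilePath = '\\vmstore\slowstorage'
    }
    # Build one hash table per attached virtual hard disk, keeping the original file name
    $VhdMoves = @()
    foreach ($Disk in ($VM | Get-VMHardDiskDrive))
    {
        $VhdMoves += @{
            SourceFilePath = $Disk.Path
            DestinationFilePath = Join-Path -Path '\\vmstore\faststorage' -ChildPath (Split-Path -Path $Disk.Path -Leaf)
        }
    }
    # Only add the Vhds parameter when the VM actually has virtual hard disks
    if ($VhdMoves.Count -gt 0)
    {
        $MoveParameters.Add('Vhds', $VhdMoves)
    }
    Move-VMStorage @MoveParameters
}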

I used splatting for the parameters for two reasons: 1, legibility. 2, to handle VMs without any virtual hard disks.

How to Perform Hyper-V Storage Migration with Hyper-V Manager

Hyper-V Manager can only be used for non-clustered virtual machines. It utilizes a wizard format. To use it to move a virtual machine’s storage:

  1. Right-click on the virtual machine and click Move.
  2. Click Next on the introductory page.
  3. Change the selection to Move the virtual machine’s storage (the same storage options would be available if you moved the VM’s ownership, but that’s not part of this article)
    [Screenshot: movevm_hvmwiz1]
  4. Choose how to perform the move. You can move everything to the same location, you can move everything to different locations, or you can move only the virtual hard disks.
    [Screenshot: movevm_hvmwiz2]
  5. What screens you see next will depend on what you chose. We’ll cover each branch.

If you opt to move everything to one location, the wizard will show you this simple page:

[Screenshot: movevm_hvmwiz3]

If you choose the option to Move the virtual machine’s data to different locations, you will first see this screen:

[Screenshot: movevm_hvmwiz4]

For every item that you check, you will be given a separate screen where you indicate the desired location for that item. The wizard uses the same screen for these items as it does for the hard-disks only option. I’ll show its screen shot next.

If you choose Move only the virtual machine’s virtual hard disks, then you will be given a sequence of screens where you instruct it where to move the files. These are the same screens used for the individual components from the previous selection:

[Screenshot: movevm_hvmwiz5]

After you make your selections, you’ll be shown a summary screen where you can click Finish to perform the move:

[Screenshot: movevm_hvmwiz6]

How to Perform Hyper-V Storage Migration with Failover Cluster Manager

Failover Cluster Manager uses a slick single-screen interface to move storage for cluster virtual machines. To access it, simply right-click a virtual machine, hover over Move, and click Virtual Machine Storage. You’ll see the following screen:

[Screenshot: movecm_fcm1]

If you just want to move the whole thing to one of the displayed Cluster Shared Volumes, just drag and drop it down to that CSV in the Cluster Storage heading at the lower left. You can drag and drop individual items or the entire VM. The Destination Folder Path will be populated accordingly.

As you can see in mine, I have all of the components except the VHD on an SMB share. I want to move the VHD to be with the rest. To get a share to show up, click the Add Share button. You’ll get this dialog:

[Screenshot: movevm_fcmaddshare]

The share will populate underneath the CSVs in the lower left. Now, I can drag and drop that file to the share. View the differences:

[Screenshot: movecm_fcm2]

Once you have the dialog the way that you like it, click Start.

Office 365 compliance issues deserve your attention

It’s no longer enough to evaluate email servers on just the basic features. Cyberattacks and data leaks are on the rise, and the explosive growth of data means IT admins must reconsider security protections and compliance concerns in their email servers.

Those worries are acute for a business considering a move from an on-premises platform to Microsoft Office 365. Admins should be aware of the potential challenges that await once their company’s data migrates to the cloud, such as Office 365 compliance.

Businesses routinely accumulate vast quantities of data, and that increases regulatory pressures to protect digital assets. Exchange admins were accustomed to managing the security and compliance of just one workload on premises; in the cloud, the number of workloads mushrooms, and the list of Office 365 services that contain company data includes SharePoint, Skype and OneDrive. With Office 365, IT admins are responsible for data governance, and they need to consider new areas of security and compliance.

Microsoft invests $1 billion annually in cybersecurity research and development. The company regularly introduces new features and enhancements for Office 365 security. IT admins can use these modern accoutrements as ammunition to convince their business that it is worth the investment. But before making the move, administrators must address important questions about Office 365 compliance and security.

Navigate Office 365 compliance aspects

With Office 365, IT administrators have one common information protection layer.

Microsoft moved away from a decentralized administration model for on-premises Exchange, where each workload in the platform had its own security and compliance management console. There is now one centralized portal where admins can see all aspects of Office 365 compliance and security.

This portal offers admins a single place to set up and configure the policies related to Office 365 areas, such as SharePoint, OneDrive and email messages. Admins can also use the Office 365 Admin mobile app to access the management console and make adjustments on the go.

Make a data governance plan

As an important preliminary step, many early Office 365 adopters advise IT admins to put together a data governance plan. You’ll want all the policies needed to meet the business requirements in place before the data migrates. The Microsoft FastTrack team or third-party vendors can assist.

With on-premises Exchange, admins’ only compliance concern is with email messages. But for Office 365 compliance, admins must consider data elsewhere, such as Skype for Business, files and SharePoint content, that Microsoft’s data centers manage and store. IT administrators need to expand the scope of their compliance and security policies beyond Exchange and set policies for other workloads. Office 365 offers flexibility and enables some policies to be applied to multiple workloads; this eliminates the duplication of work when creating specific compliance policies.

IT admins are used to digging through troves of user activities and system logs to identify compliance and security issues. Office 365 eases that burden and offers incident and auditing capabilities, such as searchable audit logs, that are easy to use and navigate. IT administrators can now receive alerts on data deletions, on sensitive content being sent to external users, or when a user signs in from a risky IP address.

Know what else is covered

In addition to features that protect and monitor compliance in services such as SharePoint, OneDrive and Skype for Business, Microsoft announced in 2017 that it would extend that ability to some external data as well. The Advanced Data Governance feature in Office 365 enables administrators to ingest external data from places such as Facebook, Bloomberg, Twitter and LinkedIn; store it within Office 365 cloud storage; perform searches; and apply compliance policies to it.

Intelligence-infused services are nothing new to Microsoft, which seems to recognize the importance of artificial intelligence and how it enables administrators to perform smarter searches and detect abnormal activities. Advanced Threat Protection, Advanced eDiscovery, automatic data classification, and Advanced Security Management use AI to assist with early detection, discovery and prevention.

Manage security needs quickly

An on-premises environment typically requires admins to spend time managing multiple security and compliance platforms. With Office 365, IT administrators have one common information protection layer; a centralized administration portal manages all security and compliance needs for cloud workloads.

Surprisingly, these security components don’t require much from IT, as the tools and intelligence services automate, detect and remedy many issues that admins traditionally handled manually. Not only is there a more comprehensive security layer, but IT admins have more time to efficiently adapt to external threats.

The base Office 365 packages do not include every security and compliance feature. Determine which features your business needs and whether they require licenses to enable advanced capabilities. While Office 365 E5 includes several advanced security and compliance features, there are others — such as advanced threat analytics and Azure Active Directory premium services — that Microsoft considers add-ons, which will cost extra.

As more businesses move their email servers to the cloud and adopt cloud-based workloads within Office 365, there is demand for better visibility and improved security. IT administrators recognize they must adjust their security and compliance practices. But that brings the challenge of relying on one vendor and trusting it with the data. So far Microsoft has taken appropriate steps to invest in its Office 365 compliance and security capabilities, and all IT administrators can do is implement the recommended services based on best practices and recommendations.