Windows 10 SDK Preview Build 18272 available now! – Windows Developer Blog

Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18272 or greater). The Preview SDK Build 18272 contains bug fixes and changes to the API surface area that are still under development.
The Preview SDK can be downloaded from the developer section of the Windows Insider site.
For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target Windows 10, version 1803 or earlier, to the Microsoft Store.
The Windows SDK is now formally supported only by Visual Studio 2017 and later. You can download Visual Studio 2017 here.
This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
To assist with scripted access to the SDK, the ISO will also be accessible through the following URL once the static URL is published: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18272

Additions:

namespace Windows.ApplicationModel.Calls {
public sealed class PhoneLine {
PhoneLineBluetoothDetails BluetoothDetails { get; }
HResult EnableTextReply(bool value);
}
public sealed class PhoneLineBluetoothDetails
public enum PhoneLineTransport {
Bluetooth = 2,
}
}
namespace Windows.ApplicationModel.Calls.Background {
public enum PhoneIncomingCallDismissedReason
public sealed class PhoneIncomingCallDismissedTriggerDetails
public enum PhoneLineProperties : uint {
BluetoothDetails = (uint)512,
}
public enum PhoneTriggerType {
IncomingCallDismissed = 6,
}
}
namespace Windows.ApplicationModel.Calls.Provider {
public static class PhoneCallOriginManager {
public static bool IsSupported { get; }
}
}
namespace Windows.ApplicationModel.Resources.Core {
public sealed class ResourceCandidate {
ResourceCandidateKind Kind { get; }
}
public enum ResourceCandidateKind
}
namespace Windows.Globalization {
public sealed class CurrencyAmount
}
namespace Windows.Management.Deployment {
public enum AddPackageByAppInstallerOptions : uint {
ApplyToExistingPackages = (uint)512,
}
}
namespace Windows.Networking.Connectivity {
public enum NetworkAuthenticationType {
Wpa3 = 10,
Wpa3Sae = 11,
}
}
namespace Windows.Networking.NetworkOperators {
public sealed class ESim {
ESimDiscoverResult Discover();
ESimDiscoverResult Discover(string serverAddress, string matchingId);
IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
}
public sealed class ESimDiscoverEvent
public sealed class ESimDiscoverResult
public enum ESimDiscoverResultKind
}
namespace Windows.Security.DataProtection {
public enum UserDataAvailability
public sealed class UserDataAvailabilityStateChangedEventArgs
public sealed class UserDataBufferUnprotectResult
public enum UserDataBufferUnprotectStatus
public sealed class UserDataProtectionManager
public sealed class UserDataStorageItemProtectionInfo
public enum UserDataStorageItemProtectionStatus
}
namespace Windows.System {
public enum ProcessorArchitecture {
Arm64 = 12,
X86OnArm64 = 14,
}
}
namespace Windows.UI.Composition {
public interface IVisualElement
}
namespace Windows.UI.Composition.Interactions {
public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
}
}
namespace Windows.UI.Input {
public class AttachableInputObject : IClosable
public sealed class InputActivationListener : AttachableInputObject
public sealed class InputActivationListenerActivationChangedEventArgs
public enum InputActivationState
}
namespace Windows.UI.Input.Preview {
public static class InputActivationListenerPreview
}
namespace Windows.UI.Input.Preview.Injection {
public enum InjectedInputButtonEvent
public sealed class InjectedInputButtonInfo
public enum InjectedInputButtonKind
public sealed class InputInjector {
void InjectButtonInput(IIterable<InjectedInputButtonInfo> input);
}
}
namespace Windows.UI.ViewManagement {
public sealed class ApplicationView {
ApplicationWindowPresenterKind AppliedPresenterKind { get; }
string PersistedStateName { get; }
public static IAsyncOperation ClearAllPersistedStateAsync();
public static IAsyncOperation ClearPersistedStateAsync(string value);
bool TrySetPersistedStateName(string value);
}
public sealed class UISettings {
bool AutoHideScrollBars { get; }
event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
}
public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}

namespace Windows.UI.Xaml {
public class ContentRoot
public sealed class ContentRootRasterizationScaleChangedEventArgs
public sealed class ContentRootSizeChangedEventArgs
public sealed class ContentRootVisibilityChangedEventArgs
public sealed class ContentRootVisibleBoundsChangedEventArgs
public class UIElement : DependencyObject, IAnimationObject {
Shadow Shadow { get; set; }
public static DependencyProperty ShadowProperty { get; }
}
public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
}
namespace Windows.UI.Xaml.Controls {
public class ContentDialog : ContentControl {
ContentRoot AssociatedContentRoot { get; set; }
}
}
namespace Windows.UI.Xaml.Controls.Primitives {
public sealed class AppBarTemplateSettings : DependencyObject {
double NegativeCompactVerticalDelta { get; }
double NegativeHiddenVerticalDelta { get; }
double NegativeMinimalVerticalDelta { get; }
}
public sealed class CommandBarTemplateSettings : DependencyObject {
double OverflowContentCompactOpenUpDelta { get; }
double OverflowContentHiddenOpenUpDelta { get; }
double OverflowContentMinimalOpenUpDelta { get; }
}
public class FlyoutBase : DependencyObject {
ContentRoot AssociatedContentRoot { get; set; }
bool IsWindowed { get; }
public static DependencyProperty IsWindowedProperty { get; }
bool IsWindowedRequested { get; set; }
public static DependencyProperty IsWindowedRequestedProperty { get; }
}
public sealed class Popup : FrameworkElement {
ContentRoot AssociatedContentRoot { get; set; }
bool IsWindowed { get; }
public static DependencyProperty IsWindowedProperty { get; }
bool IsWindowedRequested { get; set; }
public static DependencyProperty IsWindowedRequestedProperty { get; }
bool ShouldMoveWithContentRoot { get; set; }
public static DependencyProperty ShouldMoveWithContentRootProperty { get; }
}
}
namespace Windows.UI.Xaml.Core.Direct {
public enum XamlPropertyIndex {
AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
CommandBarTemplateSettings_OverflowContentCompactOpenUpDelta = 2370,
CommandBarTemplateSettings_OverflowContentHiddenOpenUpDelta = 2371,
CommandBarTemplateSettings_OverflowContentMinimalOpenUpDelta = 2372,
}
}
namespace Windows.UI.Xaml.Hosting {
public class DesktopWindowXamlSource : IClosable {
bool ProcessKeyboardAccelerator(VirtualKey key, VirtualKeyModifiers modifiers);
}
public sealed class ElementCompositionPreview {
public static UIElement GetApplicationWindowContent(ApplicationWindow applicationWindow);
public static void SetApplicationWindowContent(ApplicationWindow applicationWindow, UIElement xamlContent);
}
}
namespace Windows.UI.Xaml.Input {
public sealed class FocusManager {
public static UIElement FindNextFocusableElementInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
public static UIElement FindNextFocusableElementInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot, Rect hintRect);
public static object GetFocusedElement(ContentRoot contentRoot);
public static bool TryMoveFocusInContentRoot(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
public static IAsyncOperation TryMoveFocusInContentRootAsync(FocusNavigationDirection focusNavigationDirection, ContentRoot contentRoot);
}
}
namespace Windows.UI.Xaml.Media {
public class Shadow : DependencyObject
public class ThemeShadow : Shadow
public sealed class VisualTreeHelper {
public static IVectorView<Popup> GetOpenPopupsWithinContentRoot(ContentRoot contentRoot);
}
}
namespace Windows.UI.Xaml.Media.Animation {
public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
bool IsShadowEnabled { get; set; }
}
}
namespace Windows.Web.Http {
public sealed class HttpClient : IClosable, IStringable {
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
}
public sealed class HttpGetBufferResult : IClosable, IStringable
public sealed class HttpGetInputStreamResult : IClosable, IStringable
public sealed class HttpGetStringResult : IClosable, IStringable
public sealed class HttpRequestResult : IClosable, IStringable
}

MEF’s new SD-WAN standard too low-level, according to analysts

MEF this week released a draft version of its software-defined WAN technical specifications, along with software developer kits for its interoperability APIs. But some industry analysts said they consider the specs to be irrelevant.

The SD-WAN standard — 3.0 SD-WAN Service Attributes and Service Definition — is currently available only to MEF members, but the Los Angeles-based association expects to ratify and release the specification to the public in the first quarter of 2019.

MEF said in a statement its primary mission with the SD-WAN standard is to create common terminology for buying, selling, deploying and delivering SD-WAN services. The group claims a common language among users, service providers and vendors will help alleviate market confusion about which components and capabilities SD-WAN services should possess. Further, MEF said a collective framework can help “pave the way for SD-WAN services certification” to clarify which SD-WAN options meet the fundamental requirements.

With so many variations of SD-WAN deployment, common terminology could prove useful. But an SD-WAN standard, especially from an organization like MEF, is unlikely to stick in the industry, according to Lee Doyle, principal analyst at Doyle Research.

“You can argue it’s nice to have,” Doyle said. “But they’re low-level standards for the interfaces.”

Industry analyst and CIMI Corp. President Tom Nolle went a step further, saying “MEF is wasting their time” in issuing the specifications. “The specs are too low-level to actually impact interworking among implementations.”

Service providers participating in the SD-WAN standard work include AT&T, CenturyLink, Comcast Business, Orange Business Services, Telia and Verizon.

MEF makes LSO Sonata API specs, SDK available

MEF also released the specifications and software developer kits (SDKs) for its Lifecycle Service Orchestration (LSO) Sonata APIs. The APIs are available in a developer release for serviceability, product inventory, quoting and ordering, according to MEF.

The Sonata API specifications work within MEF’s LSO Reference Architecture and Framework, which includes Carrier Ethernet, IP, SD-WAN, optical transport, security and other virtualized services. The Sonata API focuses on automating interprovider orchestration of various services within the architecture.

“The full suite of planned LSO Sonata APIs will deal with serviceability, product inventory, quoting, ordering, trouble ticketing, contracts and billing,” MEF said. Each SDK — available on GitHub — includes an API developer guide, a Swagger data model and other essential building blocks.

MEF also said it is continuing to pursue certification for its Sonata APIs, a process that includes a pilot program for member testing.

For Sale – Acer Predator XB271H Gaming Monitor 170hz G-sync 1ms

Amazing 27″ monitor for sale.

This thing is an absolute beast, however I’m upgrading to 4k and need to sell.

It’s really hard for me to part with this monitor as it’s been the central part of my build.

Collection only or if you pay for courier I will get it delivered.

Price and currency: 400
Delivery: Delivery cost is not included
Payment method: Bank Transfer or Paypal Gift
Location: Nottingham
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


What To Do When Live Migration Fails On Hosts With The Same CPU

Symptom: You attempt to Live Migrate a Hyper-V virtual machine to a host that has the same CPU as the source, but Hyper-V complains about incompatibilities between the two CPUs. Additionally, Live Migration between these two hosts likely worked in the past.

The event ID is 21502. The full text of the error message reads:

“Live migration of ‘Virtual Machine VMName’ failed.

Virtual machine migration operation for ‘VMNAME’ failed at migration destination ‘DESTINATION_HOST’. (Virtual machine ID VMID)

The virtual machine ‘VMNAME’ is using processor-specific features not supported on physical computer ‘DESTINATION_HOST’. To allow for the migration of this virtual machine to physical computers with different processors, modify the virtual machine settings to limit the processor features used by the virtual machine. (Virtual machine ID VMID)”

Live Migration of 'Virtual Machine svdcadmt' failed

Why Live Migration Might Fail Across Hosts with the Same CPU

Ordinarily, this problem surfaces when hosts use CPUs that expose different feature sets — just like the error message states. You can use a tool such as CPU-Z to identify those. We have an article that talks about the effect of CPU feature differences on Hyper-V.

In this discussion, we only want to talk about cases where the CPUs have the same feature set: the CPU identifiers reveal the same family, model, stepping, and revision numbers. And yet, Hyper-V says that they need compatibility mode.

Cause 1: Spectre Mitigations

The Spectre mitigations make enough of a change to prevent Live Migrations, but that might not be obvious to anyone who doesn't follow BIOS update notes. To see if that might be affecting you, check the BIOS update level on the hosts. You can do that quickly by asking PowerShell to check WMI with Get-WmiObject -ClassName Win32_BIOS or Get-CimInstance -ClassName Win32_BIOS, or, at its simplest, gwmi win32_bios:

A difference in BIOS versions might tell the entire story if you look at their release notes. When updates were released to address the first wave of Spectre-class CPU vulnerabilities, they included microcode that altered the way that CPUs process instructions. So, the CPU’s feature sets didn’t change per se, but its functionality did.

Spectre Updates to BIOS Don’t Always Cause Live Migration Failures

You may have had a few systems that received the hardware updates but did not prevent Live Migration. There's quite a bit going on in all of these updates, which amounts to a lot of moving parts:

  • These updates require a cold boot of the physical system to fully apply. Most modern systems from larger manufacturers have the ability to self-cold boot after a BIOS update, but not all do. It is possible that you have a partially-applied update waiting for a cold boot.
  • These updates require the host operating system to be fully patched. Your host(s) might be awaiting installation or post-patch reboot.
  • These updates require the guests to be cold-booted from a patched host. Some clusters have been running so well for so long that we have no idea when the last time any given guest was cold booted. If it wasn't from a patched host, then it won't have the mitigations and won't care if it moves to an unpatched host. They'll also happily move back to an unpatched host.
  • You may have registry settings that block the mitigations, which would have a side effect of preventing them from interfering with Live Migration.

I have found only one “foolproof” combination that always prevents Live Migration:

  • Source host fully patched — BIOS and Windows
  • Virtual machine operating system fully patched
  • Registry settings allow mitigation for the host, guest, and virtual machine
  • The guest was cold booted from the source host
  • Destination host is missing at least the proper BIOS update

Because Live Migration will work more often than not, it’s not easy to predict when a Live Migration will succeed across mismatched hosts.

Correcting a Live Migration Block Caused by Spectre

Your first, best choice is to bring all hosts, host operating systems, and guest operating systems up to date and ensure that no registry settings impede their application. Performance and downtime concerns are real, of course, but not as great as the threat of a compromise.  Furthermore, if you’re in the position where this article applies to you, then you already have at least one host up to date. Might as well put it to use.

You have a number of paths to solve this problem. I chose the route that would result in the least disruptions. To that end:

  • I patched all of the guests but did not allow them to reboot
  • I brought one host up to current BIOS and patch levels
  • I filled it up with all the VMs that it could take; in two node clusters, that should mean all of the guests
  • I performed a full shut down and startup of those VMs; that allowed them to apply the patch and utilize the host’s update status in one boot cycle. It also locked them to that node, so watch out for that.
  • I moved through the remaining hosts in the cluster. In larger clusters, that also meant opportunistically cold booting additional VMs

That way, each virtual machine was cold booted only one time and I did not run into any Live Migration failures. Make certain that you check your work at every point before moving on — there is much work to be done here and a missed step will likely result in additional reboots.

Note: Enabling the CPU compatibility feature will probably not help you overcome the Live Migration problem — but it might. It does not appear to affect everyone identically, likely due to fundamental differences between processor generations.

Automating the Spectre Mitigation Rollout

I opted not to script this process. The normal patching automation processes cover reboots, not cold boots, and working up a foolproof script to properly address everything that might occur did not seem worth the effort to me. These are disruptive patches anyway, so I wanted to be hands-on where possible. If patch processes like this become a regular event (and it seems that it might), I may rethink that. If I had dozens or more systems to cope with, I would have scripted it. I was lucky enough that a human-driven response worked well enough to suit. However, I did leverage bulk tools that I had available.

  • I used GPOs to change my patching behavior to prevent reboots
  • I used GPOs to selectively filter mitigation application until I was ready
  • To easily cold boot all VMs on a host, try Get-VM | Stop-VM -Passthru | Start-VM. Watch for any VMs that don’t want to stop — I deliberately chose not to force them.
  • I could have used Cluster Aware Updating to roll out my BIOS patches. I chose to manually stage the BIOS updates in this case and then allowed the planned patch reboot to handle the final application.

Overall, I did little myself other than manually time the guest power cycles.

Cause 2: Hypervisor Version Mismatch

I like to play with Windows Server Insider builds on one of my lab hosts. I keep its cluster partner at 2016 so that I can study and write articles. Somewhere along the way, I started getting the CPU feature set errors trying to move between them. Enabling the CPU compatibility feature does overcome the block in this case. Hopefully, no one is using Windows Server Insider builds in production, much less mixing them with 2016 hosts in a cluster.

It would stand to reason that this mismatch block will be corrected before 2019 goes RTM. If not, Cluster Rolling Upgrade won’t function with 2016 and 2019 hosts.

Correcting a Live Migration Block Caused by Mixed Host Versions

I hope that if you got yourself into this situation that you know how to get out of it. In my lab, I usually shut the VMs down and move them manually. They are lab systems just like the cluster, so that’s harmless. For the ones that I prefer to keep online, I have CPU compatibility mode enabled.

Do you have a Hyper-V Problem To Tackle?

These common Hyper-V troubleshooting posts have proved quite popular with you guys, but if you think there is something I’ve missed so far and should be covering let me know in the comments below and I’ll try to get around to it! Thanks for reading!

Cracking code-mixing — an important step in making human-computer interaction more engaging – Microsoft Research

EMNLP

Communication is a large part of who we are as human beings, and today, technology has allowed us to communicate in new ways and to audiences much larger and wider than ever before. That technology has assumed single-language speech, which — quite often — does not reflect the way people naturally speak. India, like many other parts of the world, is multilingual on a societal level with most people speaking two or more languages. I speak Bengali, English, and Hindi, as do a lot of my friends and colleagues. When we talk, we move fluidly between these languages without much thought.

This mixing of words and phrases is referred to as code-mixing or code-switching, and from it, we’ve gained such combinations as Hinglish and Spanglish. More than half of the world’s population speaks two or more languages, so with as many people potentially code-switching, creating technology that can process it is important in not only creating useful translation and speech recognition tools, but also in building engaging user interface. Microsoft is progressing on that front in exciting ways.

In Project Mélange, we at Microsoft Research India have been building technologies for processing code-mixed speech and text. Through large-scale computational studies, we are also exploring some fascinating linguistic and behavioral questions around code-mixing, such as why and when people code-mix, that are helping us build technology people can relate to. At the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), my colleagues and I have the opportunity to share some of our recent research with our paper “Word Embeddings for Code-Mixed Language Processing.”

A data shortage in code-mixed language

Word embeddings — multidimensional vector representation where words similar in meaning or used in similar context are closer to each other — are learnt using deep learning from large language corpora and are valuable in solving a variety of natural language processing tasks using neural techniques. For processing code-mixed language — say, Hinglish — one would ideally need an embedding of words from both Hindi and English in the same space. There are standard methods for obtaining multilingual word embeddings; however, these techniques typically try to map translation equivalents from the two languages (e.g., school and vidyalay) close to each other. This helps in cross-lingual transfer of models. For instance, a sentiment analysis system trained for English can be appropriately transferred to work for Hindi using multilingual embeddings. But it’s not ideal for code-mixed language processing. While school and vidyalay are translation equivalents, in Hinglish, school is far more commonly used than vidyalay; also, these words are used in slightly different contexts. Further, there are grammatical constraints on code-mixing that disallow certain types of direct word substitutions, most notably for verbs in Hinglish. For processing code-mixed language, the word embeddings should ideally be learnt from a corpus of code-mixed text.
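To make the context argument concrete, here is a toy sketch (not from the paper; the sentences, window size, and word choices are all invented for illustration) that builds co-occurrence vectors from a few romanized Hinglish-like sentences and compares words by cosine similarity. Words used in the same mixed-language contexts end up close together, which is the property a code-mixed corpus provides and a purely cross-lingual mapping does not:

```python
from collections import Counter, defaultdict
from math import sqrt

# Tiny invented corpus of romanized, code-mixed (Hinglish-like) sentences.
corpus = [
    "mera school bahut door hai",
    "main school jaa raha hoon",
    "school ke baad cricket khelenge",
    "mera office bahut door hai",
    "main office jaa raha hoon",
]

# Build sparse co-occurrence vectors using a +/-2 word window.
window = 2
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vectors[w][words[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# 'school' and 'office' appear in near-identical mixed-language contexts,
# so their co-occurrence vectors are much closer than 'school' and 'cricket'.
print(cosine(vectors["school"], vectors["office"]))
print(cosine(vectors["school"], vectors["cricket"]))
```

Real embeddings are of course learnt with neural methods over far larger corpora, but the principle is the same: the vector for "school" should reflect the Hinglish contexts it actually appears in, not the contexts of its translation equivalent.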

It is difficult to estimate the amount of code-mixing that happens in the world. One good proxy is the code-mixing patterns on social media. Approximately 3.5 percent of the tweets on Twitter are code-mixed. The above pie charts show the distribution of monolingual and code-mixed, or code-switched (cs), tweets in seven major European languages: Dutch (nl), English (en), French (fr), German (de), Portuguese (pt), Spanish (es), and Turkish (tr).

The chart above shows the distributions of monolingual and code-mixed tweets for 12 major cities in Europe and the Americas that were found to have very large or very small fractions of code-mixed tweets, represented in the larger pies by the missing white wedge. The smaller pies show the top two code-mixed language pairs, the size being proportionate to their usage. The Microsoft Research India team found that code-mixing is more prevalent in cities where English is not the major language used to tweet.

Even though code-mixing is extremely common in multilingual societies, it happens in casual speech and rarely in text, so we’re limited in the amount of text data available for code-mixed language. What little we do have is from informal speech conversations, such as interactions on social media, where people write almost exactly how they speak. To address this challenge, we developed a technique to generate natural-looking code-mixed data from monolingual text data. Our method is based on a linguistic model known as the equivalence constraint theory of code-mixing, which imposes several syntactic constraints on code-mixing. In building the Spanglish corpus, for example, we used Bing Microsoft Translator to first translate an English sentence into Spanish. Then we aligned the words, identifying which English word corresponded to the Spanish word, and in a process called parsing identified in the sentences the phrases and how they’re related. Then using the equivalence constraint theory, we systematically generated all possible valid Spanglish versions of the input English sentence. A small number of the generated sentences were randomly sampled based on certain criteria that indicated how close they were to natural Spanglish data, and these sentences comprise our artificial Spanglish corpus. Since there is no dearth of monolingual English and Spanish sentences, using this fully automated technique, we can generate as large a Spanglish corpus as we want.

Solving NLP tasks with an artificially generated corpus

Through experiments on parts-of-speech tagging and sentiment classification, we showed that word embeddings learnt from the artificially generated Spanglish corpus were more effective in solving these NLP tasks for code-mixed language than the standard cross-lingual embedding techniques.

The linguistic theory–based generation of code-mixed text has applications beyond word embeddings. For instance, in one of our previous studies published earlier this year, we showed that this technique helps us in learning better language models that can help us build better speech recognition systems for code-mixed speech. We are exploring its application in machine translation to improve the accuracy of mixed-language requests. And imagine a multilingual chatbot that can code-mix depending on who you are, the context of the conversation, and what topic is being discussed, and switch in a natural and appropriate way. That would be true engagement.

How PowerShell Direct helps polish off those VMs

PowerShell Direct is a very worthwhile tool for Hyper-V shops, especially if you haven’t finished configuring your VMs, but there are some caveats to its use.

Microsoft released PowerShell 2.0 in late 2009, which introduced an eagerly awaited feature: the ability to connect and manage remote machines over Web Services for Management (WSMan). PowerShell Core 6.0 debuted Secure Shell (SSH) connections for PowerShell remoting in January 2018, but it’s a slightly older remoting option, PowerShell Direct, that deserves a closer look.

Microsoft released PowerShell Direct with Windows Server 2016 and made it available on Windows 10. While WSMan and SSH require network connectivity to the remote machine, PowerShell Direct uses the VMBus to let you remote into the virtual machine (VM) without network connectivity.

How to get started with PowerShell Direct

PowerShell Direct requires Windows PowerShell 5.1 or later. There are a number of other conditions to use PowerShell Direct to administer VMs on a Hyper-V host:

  • The Hyper-V host must run Windows Server 2016 or later, or Windows 10 (Creators Update or later).
  • You must be logged on to the Hyper-V host as a Hyper-V administrator.
  • The VM must run either Windows 10 (Creators Update or later) or Windows Server 2016 or later.
  • PowerShell must be 5.1 or later.
  • PowerShell needs to run with elevated privileges.
  • The VM to manage must be on the local Hyper-V host and running.
  • You need credentials on the VM.

PowerShell Direct isn’t available to connect to Linux VMs, but you can use hvc.exe to connect to a Linux VM using SSH over the VMBus.

(Video: Using PowerShell Direct for Hyper-V management.)

If you have a cluster of Hyper-V hosts, you can only connect to VMs with PowerShell Direct on the host you’re logged on to. If a VM is running on another host, then you need to use WSMan-based PowerShell remoting in Windows PowerShell 5.1 and PowerShell Core 6.0 and up, SSH-based remoting in PowerShell Core 6.0 and up, or move the VM to the host you are on.

How to work with PowerShell Direct

With a standard WSMan-based PowerShell remoting session you supply the computer name to which you want to connect. In PowerShell Direct, you use the name of the VM or the VMId:

Get-VM W16AS01 | Format-List VMname, VMId

VMName : W16AS01

VMId : 2a1eabc2-e3cd-495c-a91f-51a1ad43104c

The VMId is a GUID; it takes a lot more effort to type correctly than the VM name. If the name of your VM isn't the same as the name of the machine inside the VM, be sure to use the VM name, not the machine name.

The following commands create a remoting session to a VM:

$cred = Get-Credential -Credential manticore\richard

$s = New-PSSession -VMName W16AS01 -Credential $cred

$s1 = New-PSSession -VMId '2a1eabc2-e3cd-495c-a91f-51a1ad43104c' -Credential $cred

$s2 = New-PSSession -VMGuid '2a1eabc2-e3cd-495c-a91f-51a1ad43104c' -Credential $cred

If you don’t supply a credential, you’ll be prompted for one. VMGuid is a parameter alias for VMId.

After establishing a session, you can perform administration tasks as shown in the figure:

PowerShell Direct session
Figure 1. PowerShell Direct establishes a connection to a Hyper-V VM to run management jobs.

Invoke-Command -Session $s -ScriptBlock {Get-Process}

The following cmdlets have -VMName and -VMId parameters: Enter-PSSession, Get-PSSession, Invoke-Command, New-PSSession, and Remove-PSSession.

You can use Invoke-Command directly against the VM without creating a remoting session:

Invoke-Command -VMName W16AS01 -ScriptBlock {$env:COMPUTERNAME} -Credential $cred

W16AS01

You must remember to supply the credentials for the machine each time you use Invoke-Command if you haven’t created a remoting session.

One thing that trips up many new PowerShell users is the double-hop problem with PowerShell remoting. Take the following command:

Invoke-Command -ComputerName W16AS01 -ScriptBlock {Invoke-Command -ComputerName W16DC01 -ScriptBlock {$env:COMPUTERNAME}}

In this instance, the admin connects to a remote machine and then tries to run a command against a different remote machine, hence a double hop. The problem is that Active Directory domains use Kerberos to manage authentication, and Kerberos does not, by default, allow the credentials delegated to the first remote machine to be used to access the second machine. The attempt fails. You can overcome this by using the Credential Security Support Provider (CredSSP) to delegate your credentials, but a better approach is to plan ahead to avoid a double hop.
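If you do need the WSMan double hop, the CredSSP workaround looks roughly like this (a sketch reusing the machine names and $cred from the examples above; CredSSP must be enabled on both sides, and it carries the security cost of leaving your full credentials on the intermediate machine):

```powershell
# On your admin workstation: allow credential delegation to the first hop
Enable-WSManCredSSP -Role Client -DelegateComputer 'W16AS01' -Force

# On W16AS01: accept delegated credentials
Enable-WSManCredSSP -Role Server -Force

# Authenticate the first hop with CredSSP so the inner command can reach W16DC01
Invoke-Command -ComputerName W16AS01 -Authentication Credssp -Credential $cred -ScriptBlock {
    Invoke-Command -ComputerName W16DC01 -ScriptBlock { $env:COMPUTERNAME }
}
```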

PowerShell Direct allows the double hop because it doesn’t use the WSMan-Kerberos approach for the first hop, so the delegation problem never arises.

Invoke-Command -Session $s -ScriptBlock {Invoke-Command -ComputerName W16DC01 -ScriptBlock {$env:COMPUTERNAME}}

W16DC01

Invoke-Command -Session $s -ScriptBlock {Get-ADUser -Identity Richard}

PSComputerName    : W16AS01
RunspaceId        : 8351563d-57f6-4af6-aa77-ba00aa48490d
DistinguishedName : CN=Richard,CN=Users,DC=Manticore,DC=org
Enabled           : True
GivenName         : Richard
Name              : Richard
ObjectClass       : user
ObjectGUID        : 9b2b7185-15d2-4014-bc2c-41b04f5f6198
SamAccountName    : Richard
SID               : S-1-5-21-759617655-3516038109-1479587680-1104
Surname           :
UserPrincipalName : Richard@Manticore.org

You can also connect to a VM using a PowerShell Direct connection and work remotely. For this tutorial, I’m using PowerShell Core 6.0.1.

Enter credentials
Figure 2. Enter your credentials to access the Hyper-V VM.

Start by getting the credentials you’ll need for the remote machine. PowerShell Core doesn’t include the Hyper-V module, so you’ll import that into your PowerShell session. The Hyper-V module runs in PowerShell Core.
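The setup described above can be sketched as follows (the module path is the default Windows PowerShell location on a standard install; adjust it if your system differs):

```powershell
# PowerShell Core 6.0 doesn't search the Windows PowerShell module path by
# default, so import the Hyper-V module explicitly from its usual location
Import-Module 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\Hyper-V'

# Gather credentials for the guest, then open a PowerShell Direct session
$cred = Get-Credential -Credential manticore\richard
$s = New-PSSession -VMName W16AS01 -Credential $cred
```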

(PowerShell Core 6.0 debuted Secure Shell (SSH) connections for PowerShell remoting in January 2018, but the slightly older remoting option, PowerShell Direct, remains the better fit for Hyper-V VMs.)

The standard PowerShell Core 6.0 configuration doesn’t include the Windows PowerShell 5.1 modules on the module path, so module auto-loading can’t find them and you must import the Hyper-V module explicitly. PowerShell Core 6.1, which does automatically include Windows PowerShell 5.1 modules on its module path, is now available. Users have six months from the September 2018 release of PowerShell Core 6.1 to upgrade from PowerShell Core 6.0 to stay in support.

Create and enter the session to the remote machine. If you already have an existing session, you can enter that. For example:

Enter-PSSession -Session $s

[W16AS01]: PS C:\Users\Richard\Documents>

In either case, the prompt changes to include the VM name. You can then work interactively as if you were logged on to the remote machine. Use Exit-PSSession to leave the PowerShell remote session.
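Alternatively, you can connect interactively in one step by giving Enter-PSSession the VM name directly (a sketch reusing $cred from the earlier examples):

```powershell
# One-step interactive PowerShell Direct connection, no session object needed
Enter-PSSession -VMName W16AS01 -Credential $cred

# ...work on the VM interactively, then return to the local prompt
Exit-PSSession
```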

Be mindful of your PowerShell Core versions

When you use PowerShell Direct, by default you connect to the Windows PowerShell 5.1 instance on the VM. This happens when you connect from a Windows PowerShell 5.1 or a PowerShell Core 6.0 instance on your Hyper-V host.

This behavior changes in PowerShell Core 6.1. If you have Windows PowerShell 5.1 and PowerShell Core 6.1 on the host and only Windows PowerShell 5.1 on the VM, then you connect to the VM’s Windows PowerShell 5.1 instance as before.

If you have Windows PowerShell 5.1 and PowerShell Core 6.1 on both the host and the VM, PowerShell Direct attempts to connect to PowerShell Core 6.1 (using the pwsh.exe executable) first and, if it is unable to connect, drops back to Windows PowerShell 5.1 (using the powershell.exe executable). This means you have to think about the PowerShell functionality on the VM and whether it will run under PowerShell Core 6.1.

Keep an eye on the PowerShell project to stay on top of the changes from the Microsoft team. It’s always a good idea to test the $PSVersionTable on the VM to check the version of PowerShell.
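The version check the article recommends can be run over PowerShell Direct itself, reusing the VM name and $cred from the earlier examples:

```powershell
# Report which PowerShell engine the Direct connection actually landed in
Invoke-Command -VMName W16AS01 -Credential $cred -ScriptBlock {
    $PSVersionTable.PSVersion
}
```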

Digital transformation budgets are on the rise for 2019

Digital transformation budgets are on the rise for 2019, according to industry analyst predictions, with investments in technologies for those initiatives driving the increase.

In an October report, Gartner projected a 3.2% increase in worldwide IT spending, going from $3.7 trillion in 2018 to $3.8 trillion next year.

Meanwhile, an annual survey from Spiceworks, released in September, found 89% of companies expect their digital transformation budgets to grow or stay steady in the upcoming year.

Such findings indicate a growing enterprise demand for the technologies that will help them transform their organizations into digital companies that are able to compete in the new economy, as they pay to shed legacy systems in favor of cloud services and other agile capabilities.

John-David Lovelock, chief forecaster, Gartner

“This is all about the move to digital business,” said John-David Lovelock, chief forecaster in Gartner’s technology and service provider research group. “They’re moving to a digital ecosystem, where they need to work more tightly across their business, with their supply chain and with their customers. They’re moving toward more dynamic environments, and they can’t do it with legacy systems and … customized code.”

Todd Tetreault, CIO of Dorel Juvenile, a maker of juvenile products based in Montreal, is seeing such trends play out in his own IT budget. Although he couldn’t release dollar figures, Tetreault said he expects his digital transformation budget to jump about 10% for 2019.

He explained that much of the increased spending will go toward “closing technical debt,” such as switching off legacy systems that are slow to configure, cumbersome to change and expensive to operate, in favor of more nimble services, such as SaaS products.

Identifying the spending priorities

“There’s a huge amount of market transformation, and our platforms have to keep up,” he said.

Tetreault said his IT budget increase also reflects growing investments in capabilities to support new business initiatives, as the company seeks to better engage customers directly and to compete in a digital marketplace that has decimated retailers, such as Toys R Us — where its products have traditionally been sold.

We can’t rely on brick-and-mortar shelf space anymore. We have to be digital.
Todd Tetreault, CIO of Dorel Juvenile

“We can’t rely on brick-and-mortar shelf space anymore. We have to be digital,” he said, adding that such market forces are spurring investments in programs critical to the company’s digital initiatives, as well as those that strengthen the company’s data curation, so it can effectively support data-driven marketing and customer engagement.

He said he’s upgrading the company’s ERP system to increase its interoperability, as well as investing in collaboration software to enable agility among teams. For instance, he invested in cloud-based project management software from Clarizen to reduce friction among workers as they seek to move quickly on product development projects.

The Spiceworks survey shows that many organizations are looking at similar spending priorities. It found 56% of digital transformation budgets at large enterprises — companies with 5,000 or more employees — will be up in 2019, with 64% of them saying the need to upgrade outdated IT infrastructure is driving the increases. They also cited growing security investments as driving increased spending.

Tammy Bilitzky, CIO, Data Conversion Laboratory

Tammy Bilitzky, CIO of Data Conversion Laboratory (DCL), based in Fresh Meadows, N.Y., said she is seeing that mix of market forces influence her budget figures for 2019 — as it has for years.

“The criticality of technology has never been a question at DCL,” she said. “Our investment in IT has been increasing, year over year, both in dollar figure and percentage of revenue, and we are budgeting significant increases in 2019.”

Bilitzky listed several specific initiatives as the primary factors contributing to budget growth for 2019. Those initiatives include security-related projects, R&D work around artificial intelligence and process automation, and infrastructure improvements to further shed legacy systems. She said she also plans to hire more technologists — something she has been doing steadily for the past several years.

Aligning the budget with business goals is key

“Our IT budget is very closely aligned with our business goals, [both] tactical and strategic,” she added. “It is a reciprocal process — business projects drive technology funding and technology requirements, such as security, and the European Union’s General Data Protection Regulation results in correlated business projects and spending.”

Spiceworks found that organizations expecting digital transformation budget increases in 2019 anticipate an average increase of 20%. Its survey found upgrading IT infrastructure is by far the biggest driver of higher IT budgets, followed by an increased priority for IT projects, security concerns, employee growth and regulatory changes.

But CIOs and analysts said such survey findings and overall budget figures only tell part of the story. Spending is up in some areas, they said, but CIOs still focus on driving efficiencies and reducing costs in other areas — particularly among commodity IT products and services.

Lovelock pointed to Gartner’s findings: Spending on commodity items, such as communications and data center technologies, is expected to be either flat or down, while spending on IT services and software is expected to go up. He said he thinks organizations should look at their IT spending not as capital expenditures and operational expenditures, but rather what should be considered commodity versus differentiated customization.

Leading organizations are “investing where it makes the difference,” he said, noting that they’re spending on emerging technologies to build out their artificial intelligence, blockchain and IoT platforms. Many companies have increased their spending on cybersecurity, as security has become a board-level priority, Lovelock added.

Digital add-on versus digital transformation

Lovelock and others also noted their research into digital transformation spending shows leading organizations understand that technology transforms their businesses and have set their budgets accordingly, while organizations that still talk about being tech-enabled or see IT as a cost center are setting budgets that mirror those ideas, with money going into add-on technologies and services that either won’t differentiate them or, worse, won’t ultimately support them in the digital economy.

Sheila Jordan, CIO of Symantec, a cybersecurity software and services firm based in Mountain View, Calif., said her 2019 budget, which went into effect April 1 of this year, reflects the company’s overall view of technology as “a natural extension of the company’s strategy.”

Sheila Jordan, CIO, Symantec

Jordan said IT is still expected to optimize its spending, so she and her team are looking at where processes can be automated, commoditized and even shed.

However, Jordan said her 2019 budget is still up over 2018, although she could not share actual figures. She said the increases are concentrated in four areas — security, regulatory and compliance, SaaS and AI — that she sees as critical for the company’s digital evolution and specifically for its efforts to create a seamless end-to-end customer experience designed “to surprise and delight.”

“That,” she added, “requires more work and more heavy lifting for most IT departments, including ours.”

And for most organizations, including Symantec, it requires more money.

For Sale – Acer Predator XB271H Gaming Monitor 170hz G-sync 1ms

Amazing 27″ monitor for sale.

This thing is an absolute beast, however I’m upgrading to 4k and need to sell.

It’s really hard for me to part with this monitor as it’s been the central part of my build.

Collection only or if you pay for courier I will get it delivered.

Price and currency: 400
Delivery: Delivery cost is not included
Payment method: Bank Transfer or Paypal Gift
Location: Nottingham
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
