Hyper-V Virtual CPUs Explained

Did your software vendor indicate that you can virtualize their application, but only if you dedicate one or more CPU cores to it? Not clear on what happens when you assign CPUs to a virtual machine? You are far from alone.

Note: This article was originally published in February 2014. It has been fully updated to be relevant as of November 2019.

Introduction to Virtual CPUs

Like all other virtual machine “hardware”, virtual CPUs do not exist. The hypervisor uses the physical host’s real CPUs to create virtual constructs to present to virtual machines. The hypervisor controls access to the real CPUs just as it controls access to all other hardware.

Hyper-V Never Assigns Physical Processors to Specific Virtual Machines

Make sure that you understand this section before moving on. Assigning 2 vCPUs to a system does not mean that Hyper-V plucks two cores out of the physical pool and permanently marries them to your virtual machine. You cannot assign a physical core to a VM at all. So, does this mean that you just can’t meet that vendor request to dedicate a core or two? Well, not exactly. More on that toward the end.

Understanding Operating System Processor Scheduling

Let’s kick this off by looking at how CPUs are used in regular Windows. Here’s a shot of my Task Manager screen:

Task Manager

Nothing fancy, right? Looks familiar, doesn’t it?

Now, back when computers never, or almost never, shipped as multi-CPU, multi-core boxes, we all knew that computers couldn't really multitask. They had one CPU and one core, so there was only one possible active thread of execution. But aside from the fancy graphical updates, Task Manager then looked pretty much like Task Manager now. You had a long list of running processes, each with a metric indicating what percentage of the CPU's time it was using.

Then, as now, each line item you see represents a process (or, new in recent Task Manager versions, a process group). A process consists of one or more threads. A thread is nothing more than a sequence of CPU instructions (keyword: sequence).

What happens is that the operating system (in Windows, this started with Windows 95 and NT) stops a running thread, preserves its state, and then starts another thread. After a bit of time, it repeats those operations for the next thread. We call this pre-emptive multitasking, meaning that the operating system decides when to suspend the current thread and switch to another. You can set priorities that affect how a process is scheduled, but the OS remains in charge of thread scheduling.

Today, almost all computers have multiple cores, so Windows can truly multi-task.

Taking These Concepts to the Hypervisor

Because of its role as a thread manager, Windows can be called a “supervisor” (very old terminology that you really never see anymore): a system that manages processes that are made up of threads. Hyper-V is a hypervisor: a system that manages supervisors that manage processes that are made up of threads.

Task Manager doesn’t work the same way for Hyper-V, but the same thing is going on. There is a list of partitions, and inside those partitions are processes and threads. The thread scheduler works pretty much the same way, something like this:

Hypervisor Thread Scheduling

Of course, a real system will always have more than nine threads running. The thread scheduler will place them all into a queue.

What About Processor Affinity?

You probably know that you can affinitize threads in Windows so that they always run on a particular core or set of cores. You cannot do that in Hyper-V. Doing so would have questionable value anyway; dedicating a thread to a core is not the same thing as dedicating a core to a thread, which is what many people really want to try to do. You can’t prevent a core from running other threads in the Windows world or the Hyper-V world.

How Does Thread Scheduling Work?

The simplest answer is that Hyper-V makes the decision at the hypervisor level. It doesn’t really let the guests have any input. Guest operating systems schedule the threads from the processes that they own. When they choose a thread to run, they send it to a virtual CPU. Hyper-V takes it from there.

The image that I presented above is necessarily an oversimplification, as it’s not simple first-in-first-out. NUMA plays a role, for instance. Really understanding this topic requires a fairly deep dive into some complex ideas. Few administrators require that level of depth, and exploring it here would take this article far afield.

The first thing that matters: affinity aside, you never know where any given thread will execute. A thread that was paused to yield CPU time to another thread may very well be assigned to another core when it is resumed.

Did you ever wonder why an application consumes right at 50% of a dual-core system and each core looks like it's running at 50% usage? That behavior indicates a single-threaded application. Each time the scheduler executes it, it consumes 100% of the core that it lands on. The next time it runs, it might stay on the same core or move to the other one. Whichever core the scheduler assigns it to, it consumes 100% of it. When Task Manager aggregates its performance for display, that averages out to an even 50% utilization: the app uses 100% of 50% of the system's capacity. Since the core not running the app remains mostly idle while the other core tops out, they cumulatively amount to 50% utilization for the measured time period. Newer versions of Task Manager can show the separate cores individually, which makes this behavior far more apparent.
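
If you want to watch this behavior outside of Task Manager, you can sample the per-core counters directly from PowerShell. A minimal sketch using the standard Windows Processor counter set (counter names assume an English-language install):

# Sample every logical core once per second for five seconds and show how busy each one is
Get-Counter -Counter '\Processor(*)\% Processor Time' -SampleInterval 1 -MaxSamples 5 |
    ForEach-Object {
        $_.CounterSamples |
            Select-Object @{ n = 'Core'; e = { $_.InstanceName } },
                          @{ n = 'PercentBusy'; e = { [math]::Round($_.CookedValue, 1) } }
    }

While a single-threaded application runs on a dual-core box, one instance will sit near 100% in each sample (possibly a different core each time) while the _Total instance hovers around 50%.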

Now we can move on to a look at the number of vCPUs assigned to a system and priority.

What Does the Number of vCPUs Assigned to a VM Really Mean?

You should first notice that you can’t assign more vCPUs to a virtual machine than you have logical processors in your host.

Invalid CPU Count

So, a virtual machine’s vCPU count means this: the maximum number of threads that the VM can run at any given moment. I can’t set the virtual machine from the screenshot to have more than two vCPUs because the host only has two logical processors. Therefore, there is nowhere for a third thread to be scheduled. But, if I had a 24-core system and left this VM at 2 vCPUs, then it would only ever send a maximum of two threads to Hyper-V for scheduling. The virtual machine’s thread scheduler (the supervisor) will keep its other threads in a queue, waiting for their turn.
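
You can check or change the vCPU count without the GUI through Hyper-V's PowerShell module. A quick sketch, run on the Hyper-V host (the VM name is hypothetical, and the VM must be off before you change the count):

# Show the current vCPU count
Get-VMProcessor -VMName 'SVDC01' | Select-Object VMName, Count

# Change it to 2 vCPUs; Hyper-V will refuse a value above the host's logical processor count
Stop-VM -Name 'SVDC01'
Set-VMProcessor -VMName 'SVDC01' -Count 2
Start-VM -Name 'SVDC01'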

But Can’t I Assign More Total vCPUs to all VMs than Physical Cores?

Yes, the total number of vCPUs across all virtual machines can exceed the number of physical cores in the host. It's no different than the fact that I've got 40+ processes "running" on my dual-core laptop right now. I can only run two threads at a time, but I will always have far more than two threads scheduled. Windows has been doing this for a very long time now, and Windows is so good at it (usually) that most people never see a need to think through what's going on. Your VMs (supervisors) will bubble up threads to run, and Hyper-V (the hypervisor) will schedule them much the way Windows has been scheduling threads ever since it outgrew cooperative scheduling in Windows 3.x.

What’s The Proper Ratio of vCPU to pCPU/Cores?

This is the question that’s on everyone’s mind. I’ll tell you straight: in the generic sense, this question has no answer.

Sure, way back when, people said 1:1. Some people still say that today. And you know, you can do it. It's wasteful, but you can do it. I could run my current desktop configuration on a server with four 16-core CPUs and never see any contention. But I probably wouldn't see much performance difference, either. Why? Because almost all my threads sit idle almost all the time. If something needs 0% CPU time, what does giving it its own core do? Nothing, that's what.

Later, the answer was upgraded to 8 vCPUs per 1 physical core. OK, sure, good.

Then it became 12.

And then the recommendations went away.

They went away because no one really has any idea. The scheduler will evenly distribute threads across the available cores. So the number of physical CPUs you need doesn't depend on how many virtual CPUs there are. It depends entirely on what the operating threads need. And even if you've got a bunch of heavy threads going, that doesn't mean the systems running them will die as they get pre-empted by other heavy threads. The necessary vCPU/pCPU ratio depends entirely on the CPU load profile and your tolerance for latency. Multiple heavy loads require a low ratio. A few heavy loads work well with a medium ratio. Light loads can run on a high-ratio system.

I'm going to let you in on a dirty little secret about CPUs: every single time a thread runs, no matter what it is, it drives the CPU at 100% (power throttling changes the clock speed, not workload saturation). The CPU is a binary device; it's either processing or it isn't. When your performance tools show you 100% or 20% or 50% or whatever number, they calculate it from a time measurement. If you see 100%, the CPU was processing during the entire measured span of time. 20% means it was running a thread one-fifth of the time and sat idle the other four-fifths.

This also means that a single thread doesn't monopolize a CPU, because Windows/Hyper-V will pre-empt it when it wants to run another thread. You can have multiple "100%" CPU threads running on the same system. Even so, a system can only act responsively when it has some idle time, meaning that most threads simply let their time slice go by. That allows other threads to access cores more quickly. When you have multiple threads always queuing for active CPU time, the overall system becomes less responsive because threads must wait. Adding cores addresses this concern because it spreads the workload out.

The upshot: if you want to know how many physical cores you need, then you need to know the performance profile of your actual workload. If you don’t know, then start from the earlier 8:1 or 12:1 recommendations.

What About Reserve and Weighting (Priority)?

I don’t recommend that you tinker with CPU settings unless you have a CPU contention problem to solve. Let the thread scheduler do its job. Just like setting CPU priorities on threads in Windows can cause more problems than they solve, fiddling with hypervisor vCPU settings can make everything worse.

Let’s look at the config screen:

vCPU Settings

The first group of boxes is the reserve. The first box represents the percentage of the VM's allotted vCPU capacity to set aside, so its real-world meaning depends on the number of vCPUs assigned to the VM. The second box, the grayed-out one, shows the total percentage of host resources that Hyper-V will set aside for this VM. In this case, I have a 2 vCPU system on a dual-core host, so the two boxes will be the same. If I set a 10 percent reserve, that's 10 percent of the total physical resources. If I drop the allocation down to 1 vCPU, then a 10 percent reserve becomes 5 percent of the physical total. The second box is auto-calculated as you adjust the first box.

The reserve is a hard minimum… sort of. If the total of all reserve settings of all virtual machines on a given host exceeds 100%, then at least one virtual machine won’t start. But, if a VM’s reserve is 0%, then it doesn’t count toward the 100% at all (seems pretty obvious, but you never know). But, if a VM with a 20% reserve is sitting idle, then other processes are allowed to use up to 100% of the available processor power… until such time as the VM with the reserve starts up. Then, once the CPU capacity is available, the reserved VM will be able to dominate up to 20% of the total computing power. Because time slices are so short, it’s effectively like it always has 20% available, but it does have to wait like everyone else.

So, that vendor that wants a dedicated CPU? If you really want to honor their wishes, this is how you do it. You enter whatever number in the top box that makes the second box show the equivalent processor power of however many pCPUs/cores the vendor thinks they need. If they want one whole CPU and you have a quad-core host, then make the second box show 25%. Do you really have to? Well, I don’t know. Their software probably doesn’t need that kind of power, but if they can kick you off support for not listening to them, well… don’t get me in the middle of that. The real reason virtualization densities never hit what the hypervisor manufacturers say they can do is because of software vendors’ arbitrary rules, but that’s a rant for another day.

The next two boxes are the limit. Now that you understand the reserve, you can understand the limit. It’s a resource cap. It keeps a greedy VM’s hands out of the cookie jar. The two boxes work together in the same way as the reserve boxes.

The final box is the priority weight. As indicated, this is relative. Every VM set to 100 (the default) has the same pull with the scheduler, but they're all beneath any VMs set to 200 and above any VMs set to 50, and so on. If you're going to tinker, weight is safer than fiddling with reserves because you can't ever prevent a VM from starting by changing relative weights. What the weight means is that when a bunch of VMs present threads to the hypervisor thread scheduler at once, the higher-weighted VMs go first.
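
All three settings are also exposed through Set-VMProcessor, which is handy if you decide to honor a vendor's "dedicated CPU" request across many VMs. A sketch built on the quad-core example above (the VM name is hypothetical, and the host-level percentage still depends on how many vCPUs the VM has):

# Reserve roughly one core's worth of a quad-core host for a 4-vCPU VM,
# leave the limit at 100% and give it a higher relative weight than the default 100
Set-VMProcessor -VMName 'LOBAPP1' -Reserve 25 -Maximum 100 -RelativeWeight 200

# Confirm the effective settings
Get-VMProcessor -VMName 'LOBAPP1' | Select-Object VMName, Count, Reserve, Maximum, RelativeWeight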

But What About Hyper-Threading?

Hyper-Threading allows a single core to operate two threads at once, sort of. The core can only actively run one of the threads at a time, but if that thread stalls while waiting for an external resource, the core switches to the other thread. AMD has added a similar simultaneous multithreading technology to its recent processors.

To kill one major misconception: Hyper-Threading does not double the core’s performance ability. Synthetic benchmarks show a high-water mark of a 25% improvement. More realistic measurements show closer to a 10% boost. An 8-core Hyper-Threaded system does not perform as well as a 16-core non-Hyper-Threaded system. It might perform almost as well as a 9-core system.

With the so-called “classic” scheduler, Hyper-V places threads on the next available core as described above. With the core scheduler, introduced in Hyper-V 2016, Hyper-V now prevents threads owned by different virtual machines from running side-by-side on the same core. It will, however, continue to pre-empt one virtual machine’s threads in favor of another’s. We have an article that deals with the core scheduler.

Making Sense of Everything

I know this is a lot of information. Most people come here wanting to know how many vCPUs to assign to a VM or how many total vCPUs to run on a single system.

Personally, I assign 2 vCPUs to every VM to start. That gives it at least two places to run threads, which gives it responsiveness. On a dual-processor system, it also ensures that the VM automatically has a presence on both NUMA nodes. I do not assign more vCPU to a VM until I know that it needs it (or an application vendor demands it).

As for the ratio of vCPU to pCPU, that works mostly the same way. There is no formula or magic knowledge that you can simply apply. If you plan to virtualize existing workloads, then measure their current CPU utilization and tally it up; that will tell you what you need to know. Microsoft’s Assessment and Planning Toolkit might help you. Otherwise, you simply add resources and monitor usage. If your hardware cannot handle your workload, then you need to scale out.
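
If you want a rough starting point for that tally, sampling each existing server's average CPU gives you numbers to add up. A minimal sketch (server names are placeholders, remote counter access must be permitted, and in practice you would sample over days rather than a single minute):

$servers = 'APP01', 'SQL01', 'FILE01'
$results = foreach ($server in $servers) {
    # Twelve 5-second samples of total CPU on each server
    $samples = Get-Counter -ComputerName $server -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 5 -MaxSamples 12
    [pscustomobject]@{
        Server    = $server
        AvgCpuPct = [math]::Round(($samples.CounterSamples.CookedValue | Measure-Object -Average).Average, 1)
    }
}
$results | Sort-Object AvgCpuPct -Descending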


Author: Eric Siron

S/4HANA Cloud integrates Qualtrics for continuous improvement

SAP is focused on better understanding what's on the minds of its customers with the latest release of S/4HANA Cloud.

SAP S/4HANA Cloud 1911, which is now available, has SAP Qualtrics experience management (XM) embedded into the user interface, creating a feedback loop for the product management team about the application. This is one of the first integrations of Qualtrics XM into SAP products since SAP acquired the company a year ago for $8 billion.

“Users can give direct feedback on the application,” said Oliver Betz, global head of product management for S/4HANA Cloud at SAP. “It’s context-sensitive, so if you’re on a homescreen, it asks you, ‘How do you like the homescreen on a scale of one to five?’ And then the user can provide more detailed feedback from there.”

The customer data is consolidated and anonymized and sent to the S/4HANA Cloud product management team, Betz said.

“We’ll regularly screen the feedback to find hot spots,” he said. “In particular we’re interested in the outliers to the good and the bad, areas where obviously there’s something we specifically need to take care of, or also some areas where users are happy about the new features.”

Because S/4HANA Cloud is a cloud product that sends out new releases every quarter, the customer feedback loop that Qualtrics provides will inform developers on how to continually improve the product, Betz said.

“This is the first phase in the next iteration [of S/4HANA Cloud], which will add more granular features,” he said. “From a product management perspective, you can potentially have a new application and have some questions around the application to better understand the usage, what customers like and what they don’t like, and then to take it in a feedback loop to iterate over the next quarterly shipments so we can always provide new enhancements.”

Qualtrics integration may take time to provide value

It has taken a while, but it’s a good thing that SAP has now begun a real Qualtrics integration story, said Jon Reed, analyst and co-founder of Diginomica.com, an analysis and news site that focuses on enterprise applications. Still, SAP faces a few obstacles before the integration into S/4HANA Cloud can be a real differentiator.

“This isn’t a plug-and-play thing where customers are immediately able to use this the way you would a new app on your phone, like a new GPS app. This is useful experiential data which you must then analyze, manage and apply,” Reed said. “Eventually, you could build useful apps and dashboards with it, but you still have to apply the insights to get the value. However, if SAP has made those strides already on integrating Qualtrics with S/4HANA Cloud 1911, that’s a positive for them and we’ll see if it’s an advantage they can use to win sales.”

The Qualtrics products are impressive, but it’s still too early in the game to judge how the SAP S/4HANA integration will work out, said Vinnie Mirchandani, analyst and founder of Deal Architect, an enterprise applications focused blog.

“SAP will see more traction with Qualtrics in the employee and customer experience feedback area,” Mirchandani said. “Experiential tools have more impact where there are more human touchpoints — employees, customer service, customer feedback on product features — so I think the blend with SuccessFactors and C/4HANA is more obvious. This doesn’t mean that S/4 won’t see benefits, but the traction may be higher in other parts of the SAP portfolio.”

SAP SuccessFactors is also beginning to integrate Qualtrics into its employee experience management functions.

It’s a good thing that SAP is attempting to become a more customer-centric company, but it will need to follow through on the promise and make it a part of the company culture, said Faith Adams, senior analyst who focuses on customer experience at Forrester Research.

Many companies are making efforts to appear to be customer-centric, but aren’t following through with the best practices that are required to become truly customer-centric, like taking actions on the feedback they get, Adams said.

“It’s sometimes more of a ‘check the box’ activity rather than something that is embedded into the DNA or a way of life,” Adams said. “I hope that SAP does follow through on the best practices, but that’s to be determined.”

Bringing analytics to business users

SAP S/4HANA Cloud 1911 also now has SAP Analytics Cloud directly embedded. This will enable business users to take advantage of analytics capabilities without going to separate applications, according to SAP’s Betz.

It comes fully integrated out of the box and doesn’t require configuration, Betz said. Users can take advantage of included dashboards or create their own.

“The majority usage at the moment is in the finance application where you can directly access your [key performance indicators] there and have it all visualized, but also create and run your own dashboards,” he said. “This is about making data more available to business users instead of waiting for a report or something to be sent; everybody can have this information on hand already without having some business analyst putting [it] together.”

The embedded analytics capability could be an important differentiator for SAP in making data analytics more democratic across organizations, said Dana Gardner, president of IT consultancy Interarbor Solutions LLC. He believes companies need to break data out of “ivory towers” now as machine learning and AI grow in popularity and sophistication.

“The more people that use more analytics in your organization, the better off the company is,” Gardner said. “It’s really important that SAP gets aggressive on this, because it’s big and we’re going to see much more with machine learning and AI, so you’re going to need to have interfaces with the means to bring the more advanced types of analytics to more people as well.”


Redis Labs eases database management with RedisInsight

The robust market of tools to help users of the Redis database manage their systems just got a new entrant.

Redis Labs disclosed the availability of its RedisInsight tool, a graphical user interface (GUI) for database management and operations.

Redis is a popular open source NoSQL database that is also increasingly being used in cloud-native Kubernetes deployments as users move workloads to the cloud. Open source database use is growing quickly according to recent reports as the need for flexible, open systems to meet different needs has become a common requirement.

Among the challenges often associated with databases of any type is ease of management, which Redis is trying to address with RedisInsight.

“Database management will never go out of fashion,” said James Governor, analyst and co-founder at RedMonk. “Anyone running a Redis cluster is going to appreciate better memory and cluster management tools.”

Governor noted that Redis is following a tested approach, by building out more tools for users that improve management. Enterprises are willing to pay for better manageability, Governor noted, and RedisInsight aims to do that.

RedisInsight based on RDBtools

The RedisInsight tool, introduced Nov. 12, is based on the RDBTools technology that Redis Labs acquired in April 2019. RDBTools is an open source GUI for users to interact with and explore data stores in a Redis database.

Over the last seven months, Redis added more capabilities to the RDBTools GUI, expanding the product’s coverage for different applications, said Alvin Richards, chief product officer at Redis.

One of the core pieces of extensibility in Redis is the ability to introduce modules that contain new data structures or processing frameworks. So for example, a module could include time series, or graph data structures, Richards explained.

“What we have added to RedisInsight is the ability to visualize the data for those different data structures from the different modules,” he said. “So if you want to visualize the connections in your graph data for example, you can see that directly within the tool.”

RedisInsight overview dashboard

RDBTools is just one of many different third-party tools that exist for providing some form of management and data insight for Redis. There are some 30 other third-party GUI tools in the Redis ecosystem, though lack of maturity is a challenge.

“They tend to sort of come up quickly and get developed once and then are never maintained,” Richards said. “So, the key thing we wanted to do is ensure that not only is it current with the latest features, but we have the apparatus behind it to carry on maintaining it.”

How RedisInsight works

For users, getting started with the new tool is relatively straightforward. RedisInsight is a piece of software that needs to be downloaded and then connected to an existing Redis database. The tool ingests all the appropriate metadata and delivers the visual interface to users.

RedisInsight is available for Windows, macOS and Linux, and also available as a Docker container. Redis doesn’t have a RedisInsight as a Service offering yet.

“We have considered having RedisInsight as a service and it’s something we’re still working on in the background, as we do see demand from our customers,” Richards said. “The challenge is always going to be making sure we have the ability to ensure that there is the right segmentation, security and authorization in place to put guarantees around the usage of data.”


How PowerCLI automation brings PowerShell capabilities to VMware

VMware admins can use PowerCLI to automate many common tasks and operations in their data centers and perform them at scale. Windows PowerShell executes PowerCLI commands via cmdlets, which are abbreviated lines of code that perform singular, specific functions.

Automation can help admins keep a large, virtualized environment running smoothly. It helps with resource and workload provisioning. It also adds speed and consistency to most operations, since an automated task should behave the same way every time. And because automation can guide daily repetitions of testing, configuration and deployment without introducing the same errors that a tired admin might, it aids in the development of modern software as well.

PowerShell provides easy automation for Windows environments. VMware admins can also use the capabilities of PowerShell, however, with the help of VMware’s PowerCLI, which uses PowerShell as a framework to execute automated tasks on VMware environments.

PowerShell and PowerCLI

In a VMware environment, PowerCLI provides management and automation at scale, and far more quickly than working through a GUI, by building on the PowerShell framework. PowerCLI functions as a command-line interface (CLI) tool that "snaps into" PowerShell, which executes its commands through cmdlets. PowerCLI cmdlets can manage infrastructure components, such as High Availability, Distributed Resource Scheduler and vMotion, and can perform tasks such as gathering information, powering VMs on and off, and altering workloads and files.

PowerShell cmdlets follow a verb-noun pattern: the verb defines an action to take, and the noun defines the object on which to perform that action. Parameters provide additional detail and specificity to PowerShell commands. In a single line of code, admins can enact mass changes to an entire VMware environment.

Common PowerCLI cmdlets

You can automate vCenter and vSphere using a handful of simple cmdlets.

With just five cmdlets, you can execute most major vCenter tasks. To obtain information about a VM, such as its name, power state, guest OS and ESXi host, use the Get-VM cmdlet. To modify an existing vCenter VM, use Set-VM. An admin can use Start-VM to start a single VM or many VMs at once. To stop a VM, use Stop-VM, which powers the VM off immediately, or Stop-VMGuest, which performs a graceful guest OS shutdown. You can use these cmdlets to perform any of these tasks at scale across an entire data center.
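
As a rough illustration of how little code those cmdlets require, the following sketch connects to a vCenter Server and acts on a group of VMs at once (the server address and the "test-" naming pattern are hypothetical, and the VMware PowerCLI module must already be installed):

# Connect to vCenter, report on every VM, then gracefully shut down a batch of test VMs
Connect-VIServer -Server 'vcenter01.lab.local'

Get-VM | Select-Object Name, PowerState, VMHost

Get-VM -Name 'test-*' |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Stop-VMGuest -Confirm:$false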

You can also automate vSphere with PowerCLI. One of the most useful cmdlets for vSphere management is Copy-VMGuestFile, which enables an admin to copy files and folders between a local machine and a vSphere VM. Admins can add a number of parameters to this cmdlet to fine-tune its behavior. For example, -GuestCredential supplies credentials for the guest OS, and -GuestToLocal reverses the direction of the copy.
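
A hedged example of both directions (the VM name, paths and credentials are all placeholders, and VMware Tools must be running in the guest):

# Push a file from the admin workstation into the guest OS
$guestCred = Get-Credential    # an account that is valid inside the guest
Copy-VMGuestFile -Source 'C:\builds\app.config' -Destination 'C:\app\' -VM (Get-VM -Name 'web01') -LocalToGuest -GuestCredential $guestCred

# Pull a log file back out of the guest with -GuestToLocal
Copy-VMGuestFile -Source 'C:\app\logs\app.log' -Destination 'C:\collected\' -VM (Get-VM -Name 'web01') -GuestToLocal -GuestCredential $guestCred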

Recent updates to PowerCLI and PowerShell

PowerCLI features over 500 separate commands, and the list is only growing. In June 2019, VMware released PowerCLI 11.3, which added 22 new cmdlets for HCX management and support for opaque networks, additional network adapter types and high-level promotion of instant clones.

PowerShell is more than simply PowerCLI, of course. In May 2019, Microsoft released the first preview of the next version of PowerShell, PowerShell 7, which builds on the .NET Core 3.0 runtime and its new APIs. At the PowerShell summit in September 2019, Microsoft announced several other developments to PowerShell programming.

PowerShell now works with AWS serverless computing, which enables you to manage a Windows deployment without managing a Windows Server machine. You can run PowerShell code in AWS's serverless environment and use it to respond to serverless events, such as taking an image placed in an AWS Simple Storage Service bucket and converting it to multiple resolutions.

PowerShell also offers a module called Simple Hierarchy in PowerShell (SHiPS). An admin can use SHiPS to build a hierarchical, file-system-style provider without the usual complexity of writing one from scratch. SHiPS reduces the amount of code it takes to write a provider module from thousands of lines to around 20.


Introducing more privacy transparency for our commercial cloud customers

At Microsoft, we listen to our customers and strive to address their questions and feedback, because one of our foundational principles is to help our customers succeed. Today Microsoft is announcing an update to the privacy provisions in the Microsoft Online Services Terms (OST) in our commercial cloud contracts that stems from additional feedback we’ve heard from our customers.

Our updated OST will reflect contractual changes we have developed with one of our public sector customers, the Dutch Ministry of Justice and Security (Dutch MoJ). The changes we are making will provide more transparency for our customers over data processing in the Microsoft cloud.

Microsoft is currently the only major cloud provider to offer such terms in the European Economic Area (EEA) and beyond.

We are also announcing that we will offer the new contractual terms to all our commercial customers – public sector and private sector, large enterprises and small and medium businesses – globally. At Microsoft we consider privacy a fundamental right, and we believe stronger privacy protections through greater transparency and accountability should benefit our customers everywhere.

Clarifying Microsoft’s responsibilities for cloud services under the OST update

In anticipation of the General Data Protection Regulation (GDPR), Microsoft designed most of its enterprise services as services where we are a data processor for our customers, taking the necessary steps to comply with the new data protection laws in Europe. At a basic level, this means Microsoft collects and uses personal data from its enterprise services to provide the online services requested by our customers and for the purposes instructed by our customers. As a processor, Microsoft ensures the integrity and safety of customer data, but that data itself is owned, managed and controlled by the customer.

Through the OST update we are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services. In the OST update, we will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune. This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.

The change to assert Microsoft as the controller for this specific set of data uses will serve our customers by providing further clarity about how we use data, and about our commitment to be accountable under GDPR to ensure that the data is handled in a compliant way.

Meanwhile, Microsoft will remain the data processor for providing the services, improving and addressing bugs or other issues related to the service, ensuring security of the services, and keeping the services up to date.

As noted above, the updated OST reflects the contractual changes we developed with the Dutch MoJ. The only substantive differences in the updated terms relate to customer-specific changes requested by the Dutch MoJ, which had to be adapted for the broader global customer base.

The work to provide our updated OST has already begun. We anticipate being able to offer the new contract provisions to all public sector and enterprise customers globally at the beginning of 2020.

Working with our customers to strengthen privacy

Before and after GDPR became law in the EU, Microsoft has taken steps to ensure that we protect the privacy of all who use our products and services. We continue to work on behalf of customers to remain aligned with the evolving legal interpretations of GDPR.  For example, customer feedback from the Dutch MoJ and others has led to the global roll out of a number of new privacy tools across our major services, specific changes to Office 365 ProPlus as well as increased transparency regarding use of diagnostic data.

We remain committed to listening closely to our customers’ needs and concerns regarding privacy. Whenever customer questions arise, we stand ready to focus our engineering, legal and business resources on implementing measures that our customers require. At Microsoft, this is part of our mission to empower every individual and organization on the planet to achieve more.

 This post is also available in Dutch, French and German.

Author: Microsoft News Center

Salesforce.org Education Cloud updates enhance student engagement

Students will be able to better engage with school staff and track their college coursework with the help of some new features in Salesforce.org Education Cloud.

These features can help students and staff get a better view of the student journey throughout the college lifecycle without having to use external systems and help better connect K-12 schools in the Salesforce ecosystem.

Higher education is an industry that lags in terms of digital transformation, said Joyce Kim, a higher education analyst at Ovum. But Salesforce’s foundation in the enterprise has a lot of applicability to the higher-education model.

“Student retention and completion are really important targets for institutions but having the right data and insights that will help a school achieve those goals is a challenge,” Kim said.

Enriching the student journey

A feature that could have a widespread effect is Salesforce Advisor Link Pathways, which assists in degree planning and helps keep students on track to graduate.

Currently, staff and students in the San Mateo County Community College District (SMCCCD) use a third-party system called DegreeWorks to aid with degree planning. Students currently have access to Salesforce, but they can only see members of their success team, alerts from faculty (such as a notice that a student failed a test) and tasks such as applying for a summer internship, getting a resume review and updating a LinkedIn profile. The Pathways feature can bring degree planning right into the Salesforce system, eliminating the need for a third-party app.

“The big thing for staff and students is being able to have everyone integrated onto one system, and be able to take action in real time,” said Karrie Mitchell, vice president of planning for the SMCCCD. “There are so many different systems that are siloed, and Education Cloud brings it all together.”

Many students take extra credits and take on extra student loan debt that don't add up to a degree, and this will help advisers proactively manage and support students to make it to their graduation goal, said Nathalie Mainland, senior vice president and general manager of Education Cloud at Salesforce.org. Salesforce.com acquired Salesforce.org in April 2019 at a price of $300 million.

Another feature that could be beneficial to SMCCCD is the Einstein Analytics template for recruitment and admissions, using the Education Data Architecture as a foundation. These templates may help admission staff find trends in each year’s class, including demographics, areas of study and who’s taking what classes — and prevent the need to manually input information into the system, Mitchell said.

“This gives you a 360-degree view of the student, and that’s critical for us,” said Daman Grewal, CTO at SMCCCD.

Other new features

Also, in the Education Data Architecture, Salesforce is adding application and test score objects, making it easier to bring in data for the recruiting and admission process, such as application data and test scores. Previously, schools were doing custom builds, and now there will be a standardized way to bring this data into the system, Mainland said.

Other new Salesforce Advisor Link features include queue management — the No. 1 most requested feature from customers — and Salesforce Advisor Link for onboarding and pre-advising. Queue management will enable students to proactively make an appointment with their advisers, and the onboarding and pre-advising feature helps catch students who received offer letters to be sure they accept and show up on campus.

And while K-12 institutions already have been using Salesforce.org Education Cloud, there is now a K-12 architecture kit. This will accelerate schools’ ability to use Education Cloud and Salesforce technology, and users will no longer have to customize it themselves, Mainland said.

While Oracle, Ellucian, Jenzabar and Campus Management are all competing vendors with end-to-end CRM suites, Salesforce is positioned competitively in the higher education CRM market, Kim said.

“Because of its user-friendly interface and ability to use emerging technologies for things like predictive analytics and automating processes, end users find their products are intuitive and effective,” she said.  

Salesforce plans to dig into this Education Cloud news during Dreamforce, which takes place Nov. 19 to 22 in San Francisco, Mainland said.

The Education Data Architecture and K-12 architecture kits are both free, open source and available to both Salesforce and non-Salesforce users. They will be available on AppExchange and GitHub.

The Education Data Architecture features will be available in January. Everything else will be available by Nov. 18.


SAP Embrace gets new love from SAP-Microsoft partnership

SAP and Microsoft are making their cloud relationship almost exclusive with a new program.

SAP Embrace was announced in May as a program to help SAP customers move workloads to public cloud hyperscalers Microsoft Azure, AWS and Google Cloud Platform (GCP). Earlier this month, SAP and Microsoft announced a new development in the program with a three-year agreement to use Microsoft Azure as the preferred hyperscaler infrastructure provider for SAP systems. The deal is intended to address SAP’s issues in moving customers both to the cloud and migration to S/4HANA by providing a simpler, more cost-effective and risk-mitigated path.

SAP Embrace is intended to provide SAP customers a path to the cloud on Microsoft Azure infrastructure, according to the companies. SAP and Microsoft, along with systems integrators, including Deloitte, Accenture and IBM, will offer SAP customers bundles of cloud services, including unified reference architectures, road maps and market-specific information to help mitigate costs and risks of moving to the cloud. Microsoft field sales teams will sell the SAP Embrace bundles directly to customers and Microsoft will also embed and resell components of SAP Cloud Platform in Azure.

SAP worked with Microsoft, AWS and Google to develop the initial phase of SAP Embrace, but subsequent development over the summer led to the partnership agreement with Microsoft to use Azure as its preferred public cloud provider, said David Robinson, SAP senior vice president and managing director of the cloud business group.

SAP customers are in large measure satisfied that any of the three public cloud providers can handle SAP HANA database workloads and run HANA-based applications, Robinson said, but they are looking for a clear and simple path to the cloud, which was the main goal of SAP Embrace.  

“[Customers] would like to understand that, as they migrate to S/4HANA in conjunction with the lift to the cloud, they can follow a path that leads to the intelligent enterprise and will give the most cost-effective and risk managed journey,” he said.

SAP customers lean toward Azure

The SAP-Microsoft partnership came about mainly because the majority of customers that SAP worked with to validate the SAP Embrace model were already leaning toward Azure, Robinson said.

This was primarily because Microsoft had demonstrated that Azure provided a consistent enterprise degree of engagement and support beyond just the compute network and data store, such as support services and lifecycle management, according to Robinson.

"Microsoft understands the enterprise and speaks the enterprise language, and has processes wrapped around their compute network and storage around Azure that are more aligned with what SAP customers need to be able to consume and drive their S/4HANA environment," he said.

SAP and Microsoft relationship may get cozier

It’s unusual that SAP would go with something that’s relatively exclusive, said Joshua Greenbaum, principal at Enterprise Applications Consulting, but it may be a sign of more to come from SAP and Microsoft.

“We know that Microsoft’s SuccessFactors implementation runs on Azure and they’re moving their ERP to Azure, so we know Microsoft wants as many workloads as it can get on Azure and they’re willing to incent SAP to do it,” Greenbaum said. “But I think there’s more to this. There will be another component of this deal coming, because I’m pretty sure that the numbers don’t add up for just this much exclusivity.”

Although the Microsoft Azure deal is not exclusive, the other two hyperscalers were not pleased at the recent addendum to SAP Embrace, Greenbaum said.

He pointed out that Robert Enslin, Google president of cloud sales and former SAP executive, and Thomas Kurian, Google Cloud CEO, are likely tapping their considerable experience to develop enterprise applications.

“It’s pretty clear that they’re going for an apps play that can compete with SAP,” Greenbaum said.

For AWS, on the other hand, Amazon’s relentless expansion into virtually every business may give potential customers pause before they entrust their systems to AWS.

“On the Amazon side there’s a lot of customers — retail, logistics, you name it — where Amazon the mothership is encroaching into a lot of core business areas, so a lot of folks are getting nervous about putting their enterprise software on the Amazon cloud,” he said. “In a way, Amazon sort of backed itself into this position.”

Other public cloud options still available

SAP customers still have the option to deploy S/4HANA and SAP HANA-based applications in any public cloud provider they want, Robinson said.

AWS recently announced new memory upgrades to the EC2 cloud infrastructure designed to manage S/4HANA sized workloads. 

"If a customer still wants to run it on AWS or Google, they can still do it; the support is the same and we continue to certify these workloads on AWS and GCP," Robinson said. "The difference now is not the support or the ability to run, certify and upgrade — we will always certify that these infrastructures perform on database workloads as we design. But with Microsoft, we're adding an additional degree of abstraction on top of that around the harmonization of the cloud platform services."


Set up PowerShell script block logging for added security

PowerShell is an incredibly comprehensive and easy to use language. But administrators need to protect their organization from bad actors who use PowerShell for criminal purposes.

PowerShell’s extensive capabilities as a native tool in Windows make it tempting for an attacker to exploit the language. Increasingly, malicious software and bad actors are using PowerShell to either glue together different attack methods or run exploits entirely through PowerShell.

There are many methods and security best practices available to secure PowerShell, but one of the most valuable is PowerShell script block logging. Script blocks are collections of statements or expressions used as a single unit; in the PowerShell language, a script block is everything inside a pair of curly braces.

Script block logging first appeared in Windows PowerShell v4.0 and was significantly enhanced in Windows PowerShell v5.0; it produces an audit trail of executed code. Windows PowerShell v5.0 introduced a logging engine that automatically decodes code that has been obfuscated with methods such as XOR, Base64 and ROT13, and it records the original obfuscated code alongside the decoded version for comparison.

PowerShell script block logging helps with the postmortem analysis of events to give additional insights if a breach occurs. It also helps IT be more proactive with monitoring for malicious events. For example, if you set up Event Subscriptions in Windows, you can send events of interest to a centralized server for a closer look.

Set up a Windows system for logging

There are two primary ways to configure script block logging on a Windows system: set a registry value directly or specify the appropriate settings in a group policy object.

To configure script block logging via the registry, use the following code while logged in as an administrator:

New-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Force
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockLogging" -Value 1 -Force

You can set PowerShell logging settings within group policy, either on the local machine or through organizationwide policies.

Open the Local Group Policy Editor and navigate to Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging.

Turning on PowerShell script block logging
Set up PowerShell script block logging from the Local Group Policy Editor in Windows.

When you enable script block logging, the editor unlocks an additional option to log events via “Log script block invocation start / stop events” when a command, script block, function or script starts and stops. This helps trace when an event happened, especially for long-running background scripts. This option generates a substantial amount of additional data in your logs.

PowerShell script block logging option
PowerShell script block logging tracks executed scripts and commands run on the command line.
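
If you manage the setting through the registry rather than group policy, the invocation start/stop option can be enabled alongside the value set earlier. The value name below mirrors the group policy option and is an assumption based on that mapping, so test it before rolling it out broadly:

# Optional: also log script block invocation start/stop events via the registry
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockInvocationLogging" -Value 1 -Force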

How to configure script block logging on non-Windows systems

PowerShell Core is the cross-platform version of PowerShell for use on Windows, Linux and macOS. To use script block logging on PowerShell Core, you define the configuration in the powershell.config.json file in the $PSHome directory, which is unique to each PowerShell installation.

From a PowerShell session, navigate to $PSHome and use the Get-ChildItem command to see if the powershell.config.json file exists. If not, create the file with this command:

sudo touch powershell.config.json

Modify the file using a tool such as the nano text editor and paste in the following configuration.

{
    "PowerShellPolicies": {
        "ScriptBlockLogging": {
            "EnableScriptBlockInvocationLogging": false,
            "EnableScriptBlockLogging": true
        }
    },
    "LogLevel": "verbose"
}

Test PowerShell script block logging

Testing the configuration is easy. From the command line, run the following:

PS /> { "log me!" }
"log me!"

Checking the logs on Windows

How do you know which entries to watch for? The main event ID is 4104. This is the ScriptBlockLogging entry, and it includes the user and domain, the logged date and time, the computer host and the script block text.

Open Event Viewer and navigate to the following log location: Applications and Services Logs > Microsoft > Windows > PowerShell > Operational.

Click on events until you find the one from the test that is listed as Event ID 4104. Filter the log for this event to make the search quicker.

Windows Event 4104
Event 4104 in the Windows Event Viewer details PowerShell activity on a Windows machine.
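
If you prefer the command line to clicking through Event Viewer, Get-WinEvent can pull the same entries; the log name below matches the Operational log referenced above:

# Show the 20 most recent script block logging events
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -MaxEvents 20 | Select-Object TimeCreated, Id, Message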

On PowerShell Core on Windows, the log location is: Applications and Services Logs > PowerShellCore > Operational.

Log location on non-Windows systems

On Linux, PowerShell script block logging writes to syslog. The location varies by distribution; this tutorial uses Ubuntu, which stores syslog output at /var/log/syslog.

Run the following command to show the log entry; you must elevate with sudo in this example and on most typical systems:

sudo cat /var/log/syslog | grep "log me"

2019-08-20T19:40:08.070328-05:00 localhost powershell[9610]: (6.2.2:9:80) [ScriptBlock_Compile_Detail:ExecuteCommand.Create.Verbose] Creating Scriptblock text (1 of 1):#012{ "log me!" }#012#012ScriptBlock ID: 4d8d3cb4-a5ef-48aa-8339-38eea05c892b#012Path:

To set up a centralized server on Linux, things are a bit different since you’re using syslog by default. You can use rsyslog to ship your logs to a log aggregation service to track PowerShell activity from a central location.


Announcing our new podcast: Artificial Intelligence in Education | | Microsoft EDU

Dan Bowen and Ray Fleming, from our Microsoft Australia Education team, have put their voices together to create a new podcast series, the Artificial Intelligence (AI) in Education Podcast. Over the coming weeks they'll be talking about what artificial intelligence is and what it could be used for in schools, colleges and universities. It isn't a technical conversation; it's intended for educators who are interested in learning more about this often-discussed technology wave.

With "the robots are coming to steal our (students') jobs!" being a hot topic at many education conferences (and at many business conferences, too), Dan and Ray keep their feet on the ground and spend their time talking about what's really happening, and what the technical language means in plain English. As the series progresses, Ray and Dan will talk about scenarios that are more specific to their areas of expertise: Dan's a schools specialist, and Ray's a specialist in higher education. Along the way they will also interview specialists in other areas, such as personalising learning and accessibility, and explore how artificial intelligence is intersecting with them and bringing innovation.

Your podcast hosts

  • Ray Fleming is the Higher Education Director for Microsoft Australia, and has spent his career working within the education ICT industry. From working with tertiary education organisations at global and national level, Ray brings insights into the rapid pace of change being seen as digital disruption occurs in other industries, and what might happen next in Australia’s universities.
    In the past Ray’s been an award-winning writer, and columnist for the Times Higher Education. He’s also a regular speaker at higher education conferences in Australia on data-led decision making, the implications of AI for the higher education sector and today’s students, and on digital transformation of industries.
  • Dan is a Technology Strategist working with schools and school systems. He has worked in education as a teacher, governor and school inspector. He also worked in higher education as a blended learning advisor, before moving into Microsoft where he was the product manager for Office 365 in Education, STEM education lead and managed the Minecraft portfolio for Australia (to the delight of his kids). In his current role he is working across Azure, Windows and devices and looking to support schools to drive educational transformation. His interest in Artificial Intelligence comes from both the use cases and implementation as well as the education and enablement of IT to drive this technology. He is interested in the social and ethical uses of AI in Education.

You can find the podcast on Spotify, Apple Podcasts and Google Podcasts: search your usual podcast app for "AI in Education", or listen directly on the podcast website at http://aipodcast.education

Author: Microsoft News Center

Know your Office 365 backup options — just in case

Exchange administrators who migrate their email to Office 365 reduce their infrastructure responsibilities, but they must not ignore areas related to disaster recovery, security, compliance and email availability.

Different businesses rely on different applications for their day-to-day operations: healthcare companies use medical records systems to treat patients, while a manufacturing plant needs its ERP system to track production. But generally speaking, most businesses, regardless of their vertical, rely on email to communicate with co-workers and customers. If the messaging platform goes down for any amount of time, users and the business suffer. A move to Microsoft's cloud-based collaboration platform introduces new administrative challenges, such as determining whether the organization needs an Office 365 backup product.

IT pros tasked with all things related to Exchange Server administration — managing multiple email services, including system uptime; mailbox recoverability; system performance; maintenance; user setups; and general reactive system issues — will have to adjust when they move to Office 365. Many of the responsibilities related to system performance, maintenance and uptime shift to Microsoft. Unfortunately, not all of these outsourced activities meet the expectations of Exchange administrators, and some admins will resort to alternative methods to ensure their systems have the right protections in place to avoid serious disasters.

To keep on-premises Exchange running with high uptime, Exchange admins rely on setting up the environment with adequate redundancies, such as virtualization with high availability, clustering and proper backup if a recovery is required. In a hosted Exchange model with Office 365, email administrators rely heavily on the hosting provider to manage those redundancies and ensure system uptime. However, despite the promised service-level agreements (SLAs) by Microsoft, there are still some gaps that Exchange administrators must plan for to get the same level of system availability and data protection they previously experienced with their legacy on-premises Exchange platform.

Hosted email in Exchange Online, which can be purchased as a stand-alone service or as part of Office 365, has certainly attracted many companies. Microsoft did not provide exact numbers in its most recent quarterly report, but Office 365 is estimated to have around 180 million commercial seats. Despite the popularity of the platform, one would assume Microsoft would offer an Office 365 backup option, at minimum for the email service. Microsoft does, but not in the way Exchange administrators know backup and disaster recovery.

Microsoft does not have backups for Exchange Online

Microsoft provides some level of recoverability with mailboxes stored in Exchange Online. If a user loses email, then the Exchange administrator can restore deleted email by restoring an entire mailbox with PowerShell or through the Outlook recycle bin.

The Undo-SoftDeletedMailbox PowerShell command recovers the deleted mailbox, but there are some limitations. The command is only useful when a significant number of folders have been deleted from a mailbox and the recovery attempt occurs within 30 days. After 30 days, the content is not recoverable.
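
As a rough sketch of what that recovery looks like in an Exchange Online PowerShell session (the mailbox address is a placeholder, and the exact parameters required can vary by tenant configuration):

# List mailboxes that are still inside the 30-day soft-delete window
Get-Mailbox -SoftDeletedMailbox | Select-Object DisplayName, PrimarySmtpAddress, WhenSoftDeleted

# Attempt to restore one of them
Undo-SoftDeletedMailbox -SoftDeletedObject 'jsmith@contoso.com' -WindowsLiveID 'jsmith@contoso.com'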

Due to this limited backup functionality, many administrators look to third-party Office 365 backup vendors such as SkyKick, BitTitan, Datto and Veeam to expand their backup and recovery capabilities beyond the 30 days that Microsoft offers. At the moment, this is the only way for Exchange administrators to satisfy their organization's backup and disaster recovery requirements.

Microsoft promises 99.9% uptime with email

No cloud provider is immune to outages and Microsoft is no different. Despite instances of service loss, Microsoft guarantees at least 99.9% uptime for Office 365. This SLA translates into no more than nine hours of downtime per year.

For most IT executives, this guarantee does not absolve them of the need to plan for possible downtime. Administrators should investigate the costs and the technical abilities of an email continuity service from vendors, including Mimecast, Barracuda or TitanHQ, to avoid trouble from unplanned outages.

Email retention policies can go a long way for sensitive content

The ability to define different types of data access and retention policies is just as important as backup and disaster recovery for organizations with compliance requirements.

Groups that need to prevent accidental email deletion will need to work with the Office 365 administrator to set up the appropriate hold policies or archiving configuration to protect that content. These are native features in Exchange Online, and administrators must build familiarity with them to understand how to meet the different legal requirements of the different groups in their organization.
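
As one small, hedged example of the kind of native control involved (the mailbox identity is a placeholder, the required licensing must be in place, and your compliance requirements dictate the actual settings):

# Place a mailbox on litigation hold and enable its online archive
Set-Mailbox -Identity 'jsmith@contoso.com' -LitigationHoldEnabled $true
Enable-Mailbox -Identity 'jsmith@contoso.com' -Archive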

Define backup retention policies to meet business needs

For most backup offerings for on-premises Exchange, storage is always a concern for administrators. Since it is generally the dictating factor behind the retention period of email backup, Exchange admins have to keep disk space in mind when they determine the best backup scheme for their organization. Hourly, daily, weekly, monthly and quarterly backup schedules are influenced by the amount of available storage.

Office 365 backup products for email from vendors such as SkyKick, Dropsuite, Acronis and Datto ease the concerns related to storage space. This gives the administrator a way to develop the best protection scheme for their company without the added worry of wondering when to purchase additional storage hardware to accommodate these backups.
