Tag Archives: past

Oracle looks to grow multi-model database features

Perhaps no single database platform over the past three decades has been as pervasive as the Oracle database.

Much as the broader IT market has evolved, so too has Oracle’s database. Oracle has added new capabilities to meet changing needs and competitive challenges. With a move toward the cloud, new multi-model database options and increasing automation, the modern Oracle database continues to move forward. Among the executives who have been at Oracle the longest is Juan Loaiza, executive vice president of mission critical database technologies, who has watched the database market evolve, first-hand, since 1988.

In this Q&A, Loaiza discusses the evolution of the database market and how Oracle’s namesake database is positioned for the future.

Why have you stayed at Oracle for more than three decades and what has been the biggest change you’ve seen over that time?

Juan Loaiza

Juan Loaiza: A lot of it has to do with the fact that Oracle has done well. I always say Oracle’s managed to stay competitive and market-leading with good technology.

Oracle also pivots very quickly when needed. How do you survive for 40 years? Well, you have to react and lead when technology changes.

Decade after decade, Oracle continues to be relevant in the database market as it pivots to include an expanding list of capabilities to serve users.

The big change that happened a little over a year ago is that Thomas Kurian [former president of product development] left Oracle. He was head of all development, and when he left, some of the teams, like database and apps, ended up rolling up to [Oracle founder and CTO] Larry Ellison. Larry is now directly managing some of the big technology teams. For example, I work directly with Larry.

What is your view on the multi-model database approach?

Loaiza: This is something we’re starting to talk more about. The term that people use is multi-model, but we’re using a different term, converged database, and the reason for that is because multi-model is kind of one component of it.

Multi-model really refers to the different data models you can represent inside the database, but we’re also doing much more than that. Blockchain is an example of converging into the database a technology that normally isn’t even thought of as database technology. So we’re going well beyond the conventional kind of multi-model of, ‘Hey, I can handle this data format and that data format.’

Initially, the relational database was the mainstream database people used for both OLTP [online transaction processing] and analytics. What has happened in the last 10 to 15 years is that a lot of new database technologies have come around: NoSQL, JSON document databases, databases for geospatial data and graph databases too. So there are a lot of specialty databases out there. What’s happening is, people are having to cobble together a complex web of databases to solve one problem, and that creates an enormous amount of complexity.

With the idea of a converged database, we’re taking all the good ideas, whether it’s NoSQL, blockchain or graph, and we’re building it into the Oracle database. So you can basically use one data store and write your application to that.

The analogy that we use is that of a smartphone. We used to have a music device and a phone device and a calendar device and a GPS device and all these things and what’s happened is they’ve all been converged into a smartphone.

Are companies actually shifting their on-premises production database deployments to the cloud?

Loaiza: There’s definitely a switch to the cloud. There are two models of moving to cloud; one is kind of the grassroots. We’re seeing some of that, for example, with our autonomous database that people are using now. They’re like, ‘Hey, I’m in the finance department, and I need a reporting database,’ or, ‘Hey, I’m in the marketing department, and I need some database to run a campaign with.’ That’s the grassroots model: those folks are building something new and they want to just go to cloud. It’s much easier, much quicker and much more agile to set up a database in the cloud.

The second model is where somebody up in the hierarchy says, ‘Hey, we have a strategy to move to cloud.’ Some companies want to move quickly and some companies say, ‘Hey, you know, I’m going to take my time,’ and there’s everything in the middle.

Will autonomous database technology mean enterprises will need fewer database professionals?

Loaiza: The autonomous database addresses the mundane aspects of running a database. Things like tuning the database, installing it, configuring it, setting up HA [high availability], among other tasks. That doesn’t mean that there’s nothing for database professionals to do.

Like every other field where there is automation, what you do is you move upstream; you say, ‘Hey, I’m going to work on machine learning or analytics or blockchain or security.’ There are a lot of different aspects of data management that require a lot of labor.

One of the nice things about this industry is that there is no real unemployment crisis in IT. There are a lot of unfilled jobs.

So it’s pretty straightforward for someone who has good skills in data management to just move upstream and do something that’s going to add more specific value than just configuring and setting up databases, which is really more of a mechanical process.

This interview has been edited for clarity and conciseness.


Tallying the momentous growth and continued expansion of Dynamics 365 and the Power Platform – The Official Microsoft Blog

We’ve seen incredible growth of Dynamics 365 and the Power Platform just in the past year. This momentum is driving a massive investment in people and breakthrough technologies that will empower organizations to transform in the next decade.

We have allocated hundreds of millions of dollars to our business cloud, which powers business transformation across markets and industries and helps organizations solve difficult problems.

This fiscal year we are also heavily investing in the people that bring Dynamics 365 and the Power Platform to life — a rapidly growing global network of experts, from engineers and researchers to sales and marketing professionals. Side-by-side with our incredible partner community, the people that power innovation at Microsoft will fuel transformational experiences for our customers into the next decade.

Accelerating innovation across industries

In every industry, I hear about the struggle to transform from a reactive to a proactive organization that can respond to changes in the market, customer needs, and even within their own business. When I talk to customers who have rolled out Dynamics 365 and the Power Platform, the conversation shifts to the breakthrough outcomes they’ve achieved, often in very short time frames.

Customers talk about our unique ability to connect data holistically across departments and teams — with AI-powered insights to drive better outcomes. Let me share a few examples.

This year we’ve focused on a new vision for retail that unifies back office, in-store and digital experiences. One of Washington state’s founding wineries — Ste. Michelle Wine Estates — is onboarding Dynamics 365 Commerce to bridge physical and digital channels, streamline operations with cloud intelligence and continue building brand loyalty with hyper-personalized customer experiences.

When I talk to manufacturers, we often zero in on ways to bring more efficiency to the factory floor and supply chain. Again, it’s our ability to harness data from physical and digital worlds and reason over it with AI-infused insights that opens doors to new possibilities. For example, Majans, the Australia-based snackfood company, is creating the factory of the future with the help of Microsoft Dynamics 365 Supply Chain Management, Power BI and Azure IoT Hub — bringing Internet of Things (IoT) intelligence to every step in the supply chain, from quality control on the production floor to key performance indicators to track future investments. When everyone relies on a single source of truth about production, inventory and sales performance, the decisions employees make drive the same outcome — all made possible on our connected business cloud.

These connected experiences extend to emerging technologies that bridge digital and physical worlds, such as our investment in mixed reality. We’re working with companies like PACCAR — manufacturer of premium trucks — to improve manufacturing productivity and employee training using Dynamics 365 Guides and HoloLens 2, as well as Siemens to enable technicians to service its eHighway — an electrified freight transport system — by completing service steps with hands-free efficiency using HoloLens and two-way communication and documentation in Dynamics 365 Field Service.

For many of our customers, the journey to Dynamics 365 and the Power Platform started with a need for more personalized customer experiences. Our customer data platform (CDP), featuring Dynamics 365 Customer Insights, is helping Tivoli Gardens — one of the world’s longest-running amusement parks — personalize guest experiences across every touchpoint: on the website, at the hotel and in the park. Marston’s has onboarded Dynamics 365 Sales and Customer Insights to unify guest data and infuse personalized experiences across its 1,500-plus pubs throughout the U.K.

The value of Dynamics 365 is compounded when coupled with the Power Platform. As of late 2019, there were over 3 million monthly active developers on the Power Platform, from non-technical “citizen developers” to Microsoft partners developing world-class, customized apps. In the last year, we’ve seen 700% growth in Power Apps production apps and 300% growth in monthly active users. All of those users generate a ton of data, with more than 25 billion Power Automate steps run each day and 25 million data models hosted in the Power BI service.

The impact of the Power Platform shows in the stories our customers share with us. TruGreen, one of the largest lawn care companies in the U.S., onboarded Dynamics 365 Customer Insights and the Microsoft Power Platform to provide more proactive and predictive services to customers, freeing employees to spend more time on higher-value tasks and complex customer issue resolution. And the American Red Cross is leveraging Power Platform integration with Teams to improve disaster response times.

From the Fortune 500 companies below to the thousands of small and medium-sized businesses, city and state governments, schools and colleges, and nonprofit organizations, Dynamics 365 and the Microsoft Cloud are driving transformative success and delivering on business outcomes.

24 business logos of Microsoft partners

Partnering to drive customer success

We can’t talk about the growth and momentum of Dynamics 365 and the Power Platform without spotlighting our partner community, from ISVs to system integrators, who are the lifeblood of driving scale for our business. We launched new programs, such as the new ISV Connect Program, to help partners get Dynamics 365 and Power Apps solutions to market faster.

Want to empower the next generation of connected cloud business? Join our team!

The incredible momentum of Dynamics 365 and Power Platform means our team is growing, too. In markets around the globe, we’re looking for people who want to make a difference and take their career to the next level by helping global organizations digitally transform on Microsoft Dynamics 365 and the Power Platform. If you’re interested in joining our rapidly growing team, we’re hiring across a wealth of disciplines, from engineering to technical sales, in markets across the globe. Visit careers.microsoft.com to explore business applications specialist career opportunities.


For Sale – Mac Mini 2011 i7 FAULTY

My Mac Mini has developed a fault with (I believe) the dedicated GPU (see photo). It doesn’t get past the boot-up screen.

I personally don’t have the time (or inclination) to try to fix this.

The spec of this machine is:

  • Mac Mini mid-2011
  • 2.7GHz dual-core Intel Core i7
  • 8GB RAM (2 x 4GB sticks)
  • 500GB hard drive
  • AMD Radeon HD 6630M graphics processor

As far as I can tell, it’s only the GPU that’s causing problems; all the ports, wifi and bluetooth work.

This is being sold as NOT WORKING and therefore no returns accepted. It might be right for someone who has the time and tools to attempt a fix. Note that as I couldn’t get past the boot screen, I have opened this up to get the drive out to recover data and then wipe it.

Please do ask questions if you’re interested.


These innovations are driving collaboration in the Cascadia region | Microsoft On The Issues

As far as enviable commutes go, a short hop in a seaplane, flying over water and past snow-capped mountains, is up there.

Connecting Seattle and Vancouver, a recently launched flight route is testament to the growing ties between the locations.

The two-way trading relationship between Canada and the United States remains one of the largest in the world – and the links between British Columbia and Washington state are growing. In 2016, the launch of the Cascadia Innovation Corridor formalized the connection. A July 2019 study found that a high-speed rail line connecting Vancouver, Seattle and Portland could bring $355 billion in economic growth to the region.

Here are a few of the ways this region is coming together.

[Subscribe to Microsoft on the Issues for more on the topics that matter most.]

Innovation at scale

Microsoft, along with many other business, academic and research institutions, has been working to maximize the opportunities the corridor presents – and the Canadian Digital Technology Supercluster consortium is one example.

Bringing together names in tech, healthcare and natural resources, this consortium hopes to advance technologies by developing innovation and talent. It will also be a boon to the local economy, with the goal of creating 50,000 B.C. jobs over the next 10 years, fuelling growth across multiple sectors and expanding opportunity across the region.

A meeting of minds

Home to some of the world’s leading research and medical organizations, the Cascadia region is also aiming to become a global leader in biomedical data science and health technology innovation.


Accelerating cancer research has been a key target. Working in collaboration with the Fred Hutchinson Cancer Research Center, Microsoft has established the Cascadia Data Discovery Initiative, which is tackling the barriers that make research breakthroughs difficult, such as data discovery and access.

Microsoft’s partnership with BC Cancer is taking another approach to finding a cure for the disease. Using Azure, scientists can collaboratively analyze vast amounts of data, accelerating the pace of research. Interns from the Microsoft Garage program have been working to take this a step further, using the HoloLens platform to create mixed reality tools to help researchers visualize the structure of a tumor.

Inspiring the next generation

Work is also happening at the grass-roots level, helping to create the next generation of graduates ready to build the technologies of the future. Through a partnership with Microsoft, the British Columbia Institute of Technology is delivering a first-of-its-kind mixed-reality curriculum, with the goal of training students for jobs in digital media and entertainment along the Cascadia Corridor.

British Columbia students are also benefiting from a Microsoft initiative to help high schools build computer science programs. The TEALS program first started in Washington state in 2009 and recently expanded to B.C. It pairs computer science professionals with teachers, giving schools the training and support to help their students build skills for in-demand local careers.

A lesson for others

The Cascadia Corridor is already helping Vancouver, Seattle and the region achieve more than they could do independently.

A steering committee established at the end of 2018 will help build on the economic opportunities by growing human capital in the region, investing in and expanding transport and infrastructure, and helping to foster an ecosystem that encourages innovation.

For more on the Cascadia Corridor and other Microsoft work, follow @MSFTIssues on Twitter.


What is the Hyper-V Core Scheduler?

In the past few years, sophisticated attackers have targeted vulnerabilities in CPU acceleration techniques. Cache side-channel attacks represent a significant danger, and the danger magnifies on a host running multiple virtual machines: one compromised virtual machine can potentially retrieve information held in cache for a thread owned by another virtual machine. To address such concerns, Microsoft developed its new “HyperClear” technology pack. HyperClear implements multiple mitigation strategies. Most of them work behind the scenes and require no administrative effort or education. However, HyperClear also includes the new “core scheduler”, which might require action on your part.

The Classic Scheduler

Now that Hyper-V has all new schedulers, its original has earned the “classic” label. I wrote an article on that scheduler some time ago. The advanced schedulers do not replace the classic scheduler so much as they hone it. So, you need to understand the classic scheduler in order to understand the core scheduler. A brief recap of the earlier article:

  • You assign a specific number of virtual CPUs to a virtual machine. That sets the upper limit on how many threads the virtual machine can actively run.
  • When a virtual machine assigns a thread to a virtual CPU, Hyper-V finds the next available logical processor to operate it.

To keep it simple, imagine that Hyper-V assigns threads in round-robin fashion. Hyper-V does engage additional heuristics, such as trying to keep a thread with its owned memory in the same NUMA node. It also knows about simultaneous multi-threading (SMT) technologies, including Intel’s Hyper-Threading and AMD’s recent advances. That means that the classic scheduler will try to place threads where they can get the most processing power. Frequently, a thread shares a physical core with a completely unrelated thread — perhaps from a different virtual machine.

Risks with the Classic Scheduler

The classic scheduler poses a cross-virtual machine data security risk. It stems from the architectural nature of SMT: a single physical core can run two threads but has only one cache.

In my research, I discovered several attacks in which one thread reads cached information belonging to the other. I did not find any examples of one thread polluting the other’s data. I also did not see anything explicitly preventing that sort of assault.

On a physically installed operating system, you can mitigate these risks with relative ease by leveraging antimalware and following standard defensive practices. Software developers can make use of fencing techniques to protect their threads’ cached data. Virtual environments make things harder because the guest operating systems and binary instructions have no influence on where the hypervisor places threads.

The Core Scheduler

The core scheduler makes one fairly simple change to close the vulnerability of the classic scheduler: it never assigns threads from more than one virtual machine to any physical core. If it can’t assign a second thread from the same VM to the second logical processor, then the scheduler leaves it empty. Even better, it allows the virtual machine to decide which threads can run together.


We will walk through implementing the scheduler before discussing its impact.

Implementing Hyper-V’s Core Scheduler

The core scheduler has two configuration points:

  1. Configure Hyper-V to use the core scheduler
  2. Configure virtual machines to use two threads per virtual core

Many administrators miss that second step. Without it, a VM will always use only one logical processor on its assigned cores. Each virtual machine has its own independent setting.

We will start by changing the scheduler. You can change the scheduler at a command prompt (cmd or PowerShell) or by using Windows Admin Center.

How to Use the Command Prompt to Enable and Verify the Hyper-V Core Scheduler

For Windows and Hyper-V Server 2019, you do not need to do anything at the hypervisor level. You still need to set the virtual machines. For Windows and Hyper-V Server 2016, you must manually switch the scheduler type.

You can make the change at an elevated command prompt (PowerShell prompt is fine):
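
bcdedit /set hypervisorschedulertype core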

Note: if bcdedit does not accept the setting, ensure that you have patched the operating system.

Reboot the host to enact the change. If you want to revert to the classic scheduler, use “classic” instead of “core”. You can also select the “root” scheduler, which is intended for use with Windows 10 and will not be discussed further here.

To verify the scheduler, just run bcdedit by itself and look at the last line:

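For example (output abbreviated; the entry appears at the end of the Windows Boot Loader listing):

C:> bcdedit
...
hypervisorschedulertype Core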

bcdedit will show the scheduler type by name. It will always appear, even if you disable SMT in the host’s BIOS/UEFI configuration.

How to Use Windows Admin Center to Enable the Hyper-V Core Scheduler

Alternatively, you can use Windows Admin Center to change the scheduler.

  1. Use Windows Admin Center to open the target Hyper-V host.
  2. At the lower left, click Settings. In most browsers, it will hide behind any URL tooltip you might have visible. Move your mouse to the lower left corner and it should reveal itself.
  3. Under the Hyper-V Host Settings sub-menu, click General.
  4. Underneath the path options, you will see Hypervisor Scheduler Type. Choose your desired option. If you make a change, WAC will prompt you to reboot the host.


Note: If you do not see an option to change the scheduler, check that:

  • You have a current version of Windows Admin Center
  • The host has SMT enabled
  • The host runs at least Windows Server 2016

The scheduler type can change even if SMT is disabled on the host. However, you will need to use bcdedit to see it (see previous sub-section).

Implementing SMT on Hyper-V Virtual Machines

With the core scheduler enabled, virtual machines can no longer depend on Hyper-V to make the choice to use a core’s second logical processor. Hyper-V will expect virtual machines to decide when to use the SMT capabilities of a core. So, you must enable or disable SMT capabilities on each virtual machine just like you would for a physical host.

Because of the way this technology developed, the defaults and possible settings may seem unintuitive. New in 2019, newly-created virtual machines can automatically detect the SMT status of the host and hypervisor and use that topology. Basically, they act like a physical host that ships with Hyper-Threaded CPUs — they automatically use it. Virtual machines from previous versions need a bit more help.

Every virtual machine has a setting named “HwThreadsPerCore”. The property belongs to the Msvm_ProcessorSettingData CIM class, which connects to the virtual machine via its Msvm_Processor associated instance. You can drill down through the CIM API using the following PowerShell (don’t forget to change the virtual machine name):
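
A minimal sketch of that drill-down, assuming the root\virtualization\v2 namespace; 'demo-vm' is a placeholder for your virtual machine's name:

# locate the virtual machine by its display name (placeholder: demo-vm)
$vm = Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_ComputerSystem -Filter "ElementName='demo-vm'"
# walk to the VM's virtual processors...
$processors = Get-CimAssociatedInstance -InputObject $vm -ResultClassName Msvm_Processor
# ...then to the setting data that carries HwThreadsPerCore
foreach ($processor in $processors) {
    Get-CimAssociatedInstance -InputObject $processor -ResultClassName Msvm_ProcessorSettingData |
        Select-Object -Property HwThreadsPerCore
}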

The output of the cmdlet will present one line per virtual CPU. If you’re worried that you can only access them via this verbose technique, hang in there! I only wanted to show you where this information lives on the system. You have several easier ways to get to and modify the data. I want to finish the explanation first.

The HwThreadsPerCore setting can have three values:

  • 0 means inherit from the host and scheduler topology — limited applicability
  • 1 means 1 thread per core
  • 2 means 2 threads per core

The setting has no other valid values.

A setting of 0 makes everything nice and convenient, but it only works in very specific circumstances. Use the following to determine defaults and setting eligibility:

  • VM config version < 8.0
    • Setting is not present
    • Defaults to 1 if upgraded to VM version 8.x
    • Defaults to 0 if upgraded to VM version 9.0+
  • VM config version 8.x
    • Defaults to 1
    • Cannot use a 0 setting (cannot inherit)
    • Retains its setting if upgraded to VM version 9.0+
  • VM config version 9.x
    • Defaults to 0

I will go over the implications after we talk about checking and changing the setting.

You can see a VM’s configuration version in Hyper-V Manager and in PowerShell’s Get-VM:
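
For example (Version is a property on the virtual machine object):

Get-VM | Select-Object -Property Name, Version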


The version does affect virtual machine mobility. I will come back to that topic toward the end of the article.

How to Determine a Virtual Machine’s Threads Per Core Count

Fortunately, the built-in Hyper-V PowerShell module provides direct access to the value via the *-VMProcessor cmdlet family. As a bonus, it simplifies the input and output to a single value. Instead of the above, you can simply enter:
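
For example, with 'demo-vm' again standing in for your VM's name (the cmdlet exposes the value as HwThreadCountPerCore):

Get-VMProcessor -VMName 'demo-vm' | Select-Object -Property HwThreadCountPerCore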

If you want to see the value for all VMs:
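
Get-VM | Get-VMProcessor | Select-Object -Property VMName, HwThreadCountPerCore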

You can leverage positional parameters and aliases to simplify these for on-the-fly queries:
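
# positional VM name, and 'select' as the built-in alias for Select-Object
(Get-VMProcessor demo-vm).HwThreadCountPerCore
Get-VM | Get-VMProcessor | select VMName, HwThreadCountPerCore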

You can also see the setting in recent versions of Hyper-V Manager (Windows Server 2019 and current versions of Windows 10). Look on the NUMA sub-tab of the Processor tab. Find the Hardware threads per core setting:


In Windows Admin Center, access a virtual machine’s Processor tab in its settings. Look for Enable Simultaneous Multithreading (SMT).


If the setting does not appear, then the host does not have SMT enabled.

How to Set a Virtual Machine’s Threads Per Core Count

You can easily change a virtual machine’s hardware thread count. For either the GUI or the PowerShell commands, remember that the virtual machine must be off and you must use one of the following values:

  • 0 = inherit, and only works on 2019+ and current versions of Windows 10 and Windows Server SAC
  • 1 = one thread per hardware core
  • 2 = two threads per hardware core
  • All values above 2 are invalid

To change the setting in the GUI or Windows Admin Center, access the relevant tab as shown in the previous section’s screenshots and modify the setting there. Remember that Windows Admin Center will hide the setting if the host does not have SMT enabled. Windows Admin Center does not allow you to specify a numerical value. If unchecked, it will use a value of 1. If checked, it will use a value of 2 for version 8.x VMs and 0 for version 9.x VMs.

To change the setting in PowerShell:
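
For example, to give a powered-off VM two threads per core ('demo-vm' is a placeholder):

Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2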

To change the setting for all VMs in PowerShell:
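
Get-VM | Set-VMProcessor -HwThreadCountPerCore 2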

Note on the cmdlet’s behavior: If the target virtual machine is off, the setting will work silently with any valid value. If the target machine is on and the setting would have no effect, the cmdlet behaves as though it made the change. If the target machine is on and the setting would have made a change, PowerShell will error. You can include the -PassThru parameter to receive the modified vCPU object:
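
Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2 -Passthru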

Considerations for Hyper-V’s Core Scheduler

I recommend using the core scheduler in any situation that does not explicitly forbid it. I will not ask you to blindly take my advice, though. The core scheduler’s security implications matter, but you also need to think about scalability, performance, and compatibility.

Security Implications of the Core Scheduler

This one change instantly nullifies several exploits that could cross virtual machines, most notably in the Spectre category. Do not expect it to serve as a magic bullet, however. In particular, remember that an exploit running inside a virtual machine can still try to break other processes in the same virtual machine. By extension, the core scheduler cannot protect against threats running in the management operating system. It effectively guarantees that these exploits cannot cross partition boundaries.

For the highest level of virtual machine security, use the core scheduler in conjunction with other hardening techniques, particularly Shielded VMs.

Scalability Impact of the Core Scheduler

I have spoken with one person who was left with the impression that the core scheduler does not allow for oversubscription. They called into Microsoft support, and the engineer agreed with that assessment. I reviewed Microsoft’s public documentation as it was at the time, and I understand how they reached that conclusion. Rest assured that you can continue to oversubscribe CPU in Hyper-V. The core scheduler prevents threads owned by separate virtual machines from running simultaneously on the same core. When it starts a thread from a different virtual machine on a core, the scheduler performs a complete context switch.

You will have some reduced scalability due to the performance impact, however.

Performance Impact of the Core Scheduler

On paper, the core scheduler looks severely detrimental to performance. It reduces the number of possible run locations for any given thread. Synthetic benchmarks also show a noticeable performance reduction when compared to the classic scheduler. A few points:

  • Generic synthetic CPU benchmarks drive hosts to abnormal levels using atypical loads. In simpler terms, they do not predict real-world outcomes.
  • Physical hosts with low CPU utilization will experience no detectable performance hits.
  • Running the core scheduler on a system with SMT enabled will provide better performance than the classic scheduler on the same system with SMT disabled

Your mileage will vary. No one can accurately predict how a general-purpose system will perform after switching to the core scheduler. Even a heavily-laden processor might not lose anything. Remember that, even in the best case, an SMT-enabled core will not provide more than about a 25% improvement over the same core with SMT disabled. In practice, expect no more than a 10% boost. In the simplest terms: switching from the classic scheduler to the core scheduler might reduce how often you enjoy a 10% boost from SMT’s second logical processor. I expect few systems to lose much by switching to the core scheduler.

Some software vendors provide tools that can simulate a real-world load. Where possible, leverage those. However, unless you dedicate an entire host to guests that only operate that software, you still do not have a clear predictor.

Compatibility Concerns with the Core Scheduler

As you saw throughout the implementation section, a virtual machine’s ability to fully utilize the core scheduler depends on its configuration version. That impacts Hyper-V Replica, Live Migration, Quick Migration, virtual machine import, backup, disaster recovery, and anything else that potentially involves hosts with mismatched versions.

Microsoft drew a line with virtual machine version 5.0, which debuted with Windows Server 2012 R2 (and Windows 8.1). Any newer Hyper-V host can operate virtual machines of its version all the way down to version 5.0. On any system, run Get-VMHostSupportedVersion to see what it can handle. From a 2019 host:
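
Abbreviated, the output looks roughly like this (exact names vary; the 5.0 and 9.0 rows are the ones that matter here):

PS C:\> Get-VMHostSupportedVersion

Name                                         Version IsDefault
----                                         ------- ---------
Microsoft Windows Server 2012 R2/Windows 8.1 5.0     False
...
Microsoft Windows Server 2019/Windows 10...  9.0     True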

So, you can freely move version 5.0 VMs between a 2012 R2 host, a 2016 host and a 2019 host. But a VM must be at least version 8.0 to use SMT under the core scheduler. So, when a v5.0 VM lands on a host running the core scheduler, it cannot use SMT. I did not uncover any problems when testing an SMT-disabled guest on an SMT-enabled host or vice versa. I even set up two nodes in a cluster, one with Hyper-Threading on and the other with Hyper-Threading off, and moved SMT-enabled and SMT-disabled guests between them without trouble.

The final compatibility verdict: running old virtual machine versions on core-scheduled systems means that you lose a bit of density, but they will operate.

Summary of the Core Scheduler

This is a lot of information to digest, so let’s break it down to its simplest components. The core scheduler provides a strong inter-virtual machine barrier against cache side-channel attacks, such as the Spectre variants. Its implementation requires an overall reduction in the ability to use simultaneous multi-threaded (SMT) cores. Most systems will not suffer a meaningful performance penalty. Virtual machines have their own ability to enable or disable SMT when running on a core-scheduled system. All virtual machine versions prior to 8.0 (WS2016/W10 Anniversary) will only use one logical processor per core when running on a core-scheduled host.


For Sale – ASUS ROG GTX980 Poseidon Platinum

Selling my GTX980 as I rarely use my PC for gaming anymore. It’s played everything I’ve chucked at it in the past and is still a brilliant card. It has the hybrid cooler; I’ve always run it on air, but it can be used in a proper water-cooled rig. Just removed from the PC today.

Here’s the official Asus page: POSEIDON-GTX980-P-4GD5 | Graphics Cards | ASUS United Kingdom
And there are plenty of reviews to back up the performance.

Price is £150 plus postage; it’s a good-size box and quite hefty. I could deliver if you are local.


Price and currency: £150
Delivery: Delivery cost is not included
Payment method: BT, cash if being delivered
Location: Holywell , Tyne and Wear
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


Cuphead Goes Triple Platinum! – | Studio MDHR

Hello friends! We’re barely past the halfway mark of 2018, and are humbled to announce that Cuphead has already reached over 3 million copies sold! It’s hard for us to express just how appreciative we are to everyone who has played and enjoyed our niche little run & gun game with the wacky rubber-hose characters.

To celebrate this huge milestone, we’re putting Cuphead on sale on Steam and Xbox for the next couple of days! So if you haven’t had a chance to wallop the Devil, it’s a good day for a swell battle!

We’ll also be marking the occasion here at Studio MDHR with some extra-special giveaways over the next few days. Keep an eye out on Twitter or Facebook for some fun surprises.

Looking back, 2018 has been a very exciting year for us.

The big highlight, of course, was getting the chance to pull the curtain back on the project we’ve been working on since the original game’s release: our upcoming DLC, The Delicious Last Course!

We’ve also been so fortunate to be able to share in the adventure with some amazing people. We met the legendary animator James Baxter at this year’s Annie Awards in February, and somehow managed to convince him to do an on-stage animation collaboration with us at E3!

In June, we cheered on the amazing Mexican Runner as he delivered a lightning fast and hilariously entertaining speedrun at Summer Games Done Quick 2018. It’s such a thrill to see people still finding new and exciting ways to explore the Inkwell Isles.

So while we’ll mostly have our pencils to paper for the rest of the year, we still have a couple little tricks up our sleeve before the end of 2018. Stay tuned to our social channels to be the first to know! Hi dee ho!
