For Sale – HP Envy x360

Discussion in ‘Laptop, Notebook & Macbook Classifieds‘ started by NptonSaintsFan, Jul 18, 2019.

  1. NptonSaintsFan (Novice Member, joined Jul 3, 2019, Northants)

    Bought this laptop from here last week. Having used it for an hour or so, it is simply too small for my needs, so I am moving it on. Details below.

    The remarkable versatility of the 33.8 cm (13.3″) ENVY x360 PC gives you the freedom to go anywhere life takes you. With the powerful AMD processor and long battery life, it delivers ample power in a slim and sleek design that’s easily portable. Enhanced privacy frees you up to do more on-the-go.

    • Windows 10 Home 64
    • AMD Ryzen™ 7 2700U (2 GHz base frequency, up to 3.8 GHz burst frequency, 6 MB cache, 4 cores)
    • 33.8 cm (13.3″) diagonal Full-HD (1920 x 1080) IPS micro-edge touchscreen display with Corning® Gorilla® Glass NBT™
    • 8 GB memory; 512 GB PCIe® NVMe™ M.2 SSD
    • AMD Radeon™ RX Vega 10 Graphics; Windows Hello webcam; Backlit keyboard; HP Fast Charge; B&O Audio

    Price and currency: 600
    Delivery: Delivery cost is not included
    Payment method: BT / cash on collection
    Location: Northants
    Advertised elsewhere?: Advertised elsewhere
    Prefer goods collected?: I have no preference

    ______________________________________________________
    This message is automatically inserted in all classifieds forum threads.
    By replying to this thread you agree to abide by the trading rules detailed here.
    Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

    • Landline telephone number. Make a call to check out the area code and number are correct, too
    • Name and address including postcode
    • Valid e-mail address

    DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

  2. NptonSaintsFan
  3. NptonSaintsFan

    bump still available

Go to Original Article

Wanted – 13″ Macbook Pro 2016 or newer

Discussion in ‘Laptop, Notebook & Macbook Classifieds‘ started by tuttonp, Jul 16, 2019.

  1. tuttonp (Well-known Member, joined Jun 10, 2003, Bristol)

    I’m looking for a decent upgrade to my current and trusty Late 2013 13″ MBP (2.4 GHz i5, 8 GB RAM, 256 GB HDD)

    Not too fussed on Touchbar vs non-touchbar.

    Good condition and less than a grand, the newer the better

    Location: BRISTOL

    ______________________________________________________
    This message is automatically inserted in all classifieds forum threads.
    By replying to this thread you agree to abide by the trading rules detailed here.
    Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

    • Landline telephone number. Make a call to check out the area code and number are correct, too
    • Name and address including postcode
    • Valid e-mail address

    DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

  2. daymouse (Well-known Member, joined Feb 12, 2007)
  3. tuttonp

    Sadly, that 2017 MBP on here a week back makes yours look a bit pricey, but I'm still possibly interested. I'll see what else comes up.

  4. daymouse

    You got a link to it?

  5. tuttonp
  6. daymouse

    Wow, very cheap! Not sure what was going on there, but as with other things on here, just because someone sells at a low price doesn't mean it brings everything else down. The person that bought it just got a very good one-off deal.

    The only major difference will be a slightly faster CPU, as that's all they generally refresh. Also note mine has just had the battery replaced, so the cycle count is at 1.

  7. tuttonp

    Yep fully aware of both points, I’ll see what comes up

  8. tomcannon (Active Member, joined Jul 6, 2011)
  9. jacknory (Active Member, joined Jul 31, 2013)

    Hey

    You can have mine if you want? You posted on my previous thread here:

    For Sale – Apple MacBook Pro 2017

  10. jacknory
  11. tuttonp

    Yes please. Drop me a PM with payment details

  12. jacknory

Go to Original Article

For Sale – Surface Pro 6

Discussion in ‘Laptop, Notebook & Macbook Classifieds‘ started by nif619, Jul 25, 2019 at 7:47 PM.

  1. nif619 (Active Member, joined Nov 24, 2007, Dagenham)

    Have the above for sale.
    Absolutely mint, been used only a few times.
    Bought from John Lewis and receipt will be provided.
    Warranty runs until November 2020.
    Comes with signature type cover which is the more expensive premium type cover.
    Original Surface charger.

    8th-gen Core i5 processor
    128 GB storage
    8 GB RAM

    Looking to raise money for my holiday next week so looking to sell sharpish.

    Price and currency: 650
    Delivery: Delivery cost is included within my country
    Payment method: BT / PP
    Location: Dagenham, Essex
    Advertised elsewhere?: Not advertised elsewhere
    Prefer goods collected?: I prefer the goods to be collected

    ______________________________________________________
    This message is automatically inserted in all classifieds forum threads.
    By replying to this thread you agree to abide by the trading rules detailed here.
    Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

    • Landline telephone number. Make a call to check out the area code and number are correct, too
    • Name and address including postcode
    • Valid e-mail address

    DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

    Last edited: Jul 25, 2019 at 8:15 PM

Go to Original Article

For Sale – MacBook Pro 15 Late 2018 & LG 27″ 5k Monitor

Upgraded to the 2019 model, so no longer require the 2018.

It’s a highly specced model with:

Top-spec Core i9 6-core 2.9 GHz
Radeon Pro 560X
32 GB DDR4 2400 MHz
2 TB SSD
Space Gray

Plus AppleCare, expiring July 2021

All in good working order, only exception being two cosmetic marks at the front of the unit. Other than these two marks, it is in perfect condition. No chassis scratches I can see, screen spotless, Touch Bar spotless, keyboard clean and no issues, touchpad spotless, wrist rests spotless etc etc…

Comes with original box & accessories.

Original purchase price £4,409 for the laptop & £329 for AppleCare.

Asking price £2,850 including shipping. Collection / meet in London preferred.

Apple asking £3279 for the same model refurbished without AppleCare – LINK – so a saving of £600 to compensate for the light cosmetic damage.

[Photos: IMG_8831, IMG_0404, IMG_5346, IMG_0679, IMG_4427, IMG_2054 and IMG_4365 by CosmicLogos, on Flickr]

Also selling LG 27″ 5k Monitor

Don’t have the space for this anymore, otherwise it’s a great piece of kit…

Only used a few days. Bought in November 2018, warranty until Nov 2021.

Perfect condition. No box, so collection only or personal delivery can be arranged within reasonable distances or near where I’m travelling.

Asking price is £800

[Photo: IMG_4924 by CosmicLogos, on Flickr]

Price and currency: £2850 / £800
Delivery: Delivery cost is included within my country
Payment method: BT
Location: Blackfriars, London
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article

What is the Hyper-V Core Scheduler?

In the past few years, sophisticated attackers have targeted vulnerabilities in CPU acceleration techniques. Cache side-channel attacks represent a significant danger, and the risk is magnified on a host running multiple virtual machines: one compromised virtual machine can potentially retrieve information held in cache for a thread owned by another virtual machine. To address such concerns, Microsoft developed its new “HyperClear” technology pack. HyperClear implements multiple mitigation strategies. Most of them work behind the scenes and require no administrative effort or education. However, HyperClear also includes the new “core scheduler”, which might require you to take action.

The Classic Scheduler

Now that Hyper-V has all new schedulers, its original has earned the “classic” label. I wrote an article on that scheduler some time ago. The advanced schedulers do not replace the classic scheduler so much as they hone it. So, you need to understand the classic scheduler in order to understand the core scheduler. A brief recap of the earlier article:

  • You assign a specific number of virtual CPUs to a virtual machine. That sets the upper limit on how many threads the virtual machine can actively run.
  • When a virtual machine assigns a thread to a virtual CPU, Hyper-V finds the next available logical processor to operate it.

To keep it simple, imagine that Hyper-V assigns threads in round-robin fashion. Hyper-V does engage additional heuristics, such as trying to keep a thread with its owned memory in the same NUMA node. It also knows about simultaneous multi-threading (SMT) technologies, including Intel’s Hyper-Threading and AMD’s recent advances. That means that the classic scheduler will try to place threads where they can get the most processing power. Frequently, a thread shares a physical core with a completely unrelated thread — perhaps from a different virtual machine.

Risks with the Classic Scheduler

The classic scheduler poses a cross-virtual machine data security risk. It stems from the architectural nature of SMT: a single physical core can run two threads but has only one cache.

Classic Scheduler

In my research, I discovered several attacks in which one thread reads cached information belonging to the other. I did not find any examples of one thread polluting the other thread's data, but I also did not see anything explicitly preventing that sort of assault.

On a physically installed operating system, you can mitigate these risks with relative ease by leveraging antimalware and following standard defensive practices. Software developers can make use of fencing techniques to protect their threads’ cached data. Virtual environments make things harder because the guest operating systems and binary instructions have no influence on where the hypervisor places threads.

The Core Scheduler

The core scheduler makes one fairly simple change to close the vulnerability of the classic scheduler: it never assigns threads from more than one virtual machine to any physical core. If it can’t assign a second thread from the same VM to the second logical processor, then the scheduler leaves it empty. Even better, it allows the virtual machine to decide which threads can run together.

Hyper-V Core Scheduler

We will walk through implementing the scheduler before discussing its impact.

Implementing Hyper-V’s Core Scheduler

The core scheduler has two configuration points:

  1. Configure Hyper-V to use the core scheduler
  2. Configure virtual machines to use two threads per virtual core

Many administrators miss that second step. Without it, a VM will always use only one logical processor on its assigned cores. Each virtual machine has its own independent setting.

We will start by changing the scheduler. You can change the scheduler at a command prompt (cmd or PowerShell) or by using Windows Admin Center.

How to Use the Command Prompt to Enable and Verify the Hyper-V Core Scheduler

For Windows and Hyper-V Server 2019, you do not need to do anything at the hypervisor level. You still need to set the virtual machines. For Windows and Hyper-V Server 2016, you must manually switch the scheduler type.

You can make the change at an elevated command prompt (PowerShell prompt is fine):
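Hyper-V exposes the scheduler choice through bcdedit's hypervisorschedulertype value; for example, to select the core scheduler:

bcdedit /set hypervisorschedulertype core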

Note: if bcdedit does not accept the setting, ensure that you have patched the operating system.

Reboot the host to enact the change. If you want to revert to the classic scheduler, use “classic” instead of “core”. You can also select the “root” scheduler, which is intended for use with Windows 10 and will not be discussed further here.

To verify the scheduler, just run bcdedit by itself and look at the last line:

bcdedit
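On a host that has been switched over, the relevant line of the Windows Boot Loader entry looks roughly like this (illustrative):

hypervisorschedulertype    Core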

bcdedit will show the scheduler type by name. It will always appear, even if you disable SMT in the host’s BIOS/UEFI configuration.

How to Use Windows Admin Center to Enable the Hyper-V Core Scheduler

Alternatively, you can use Windows Admin Center to change the scheduler.

  1. Use Windows Admin Center to open the target Hyper-V host.
  2. At the lower left, click Settings. In most browsers, it will hide behind any URL tooltip you might have visible. Move your mouse to the lower left corner and it should reveal itself.
  3. Under the Hyper-V Host Settings sub-menu, click General.
  4. Underneath the path options, you will see Hypervisor Scheduler Type. Choose your desired option. If you make a change, WAC will prompt you to reboot the host.

windows admin center

Note: If you do not see an option to change the scheduler, check that:

  • You have a current version of Windows Admin Center
  • The host has SMT enabled
  • The host runs at least Windows Server 2016

The scheduler type can still be changed even if SMT is disabled on the host. However, you will need to use bcdedit to see it (see the previous sub-section).

Implementing SMT on Hyper-V Virtual Machines

With the core scheduler enabled, virtual machines can no longer depend on Hyper-V to make the choice to use a core’s second logical processor. Hyper-V will expect virtual machines to decide when to use the SMT capabilities of a core. So, you must enable or disable SMT capabilities on each virtual machine just like you would for a physical host.

Because of the way this technology developed, the defaults and possible settings may seem unintuitive. New in 2019, newly-created virtual machines can automatically detect the SMT status of the host and hypervisor and use that topology. Basically, they act like a physical host that ships with Hyper-Threaded CPUs — they automatically use it. Virtual machines from previous versions need a bit more help.

Every virtual machine has a setting named “HwThreadsPerCore”. The property belongs to the Msvm_ProcessorSettingData CIM class, which connects to the virtual machine via its Msvm_Processor associated instance. You can drill down through the CIM API using the following PowerShell (don’t forget to change the virtual machine name):
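A sketch of that drill-down, with demo-vm as a placeholder virtual machine name:

# query the VM object, walk to its virtual processors, then to their processor setting data
Get-CimInstance -Namespace 'root/virtualization/v2' -ClassName 'Msvm_ComputerSystem' -Filter 'ElementName="demo-vm"' |
    Get-CimAssociatedInstance -ResultClassName 'Msvm_Processor' |
    Get-CimAssociatedInstance -ResultClassName 'Msvm_ProcessorSettingData' |
    Select-Object -Property HwThreadsPerCore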

The output of the cmdlet will present one line per virtual CPU. If you're worried that you can only access them via this verbose technique, hang in there! I only wanted to show you where this information lives on the system. You have several easier ways to get to and modify the data. I want to finish the explanation first.

The HwThreadsPerCore setting can have three values:

  • 0 means inherit from the host and scheduler topology — limited applicability
  • 1 means 1 thread per core
  • 2 means 2 threads per core

The setting has no other valid values.

A setting of 0 makes everything nice and convenient, but it only works in very specific circumstances. Use the following to determine defaults and setting eligibility:

  • VM config version < 8.0
    • Setting is not present
    • Defaults to 1 if upgraded to VM version 8.x
    • Defaults to 0 if upgraded to VM version 9.0+
  • VM config version 8.x
    • Defaults to 1
    • Cannot use a 0 setting (cannot inherit)
    • Retains its setting if upgraded to VM version 9.0+
  • VM config version 9.x
    • Defaults to 0

I will go over the implications after we talk about checking and changing the setting.

You can see a VM’s configuration version in Hyper-V Manager and PowerShell’s Get-VM:

Hyper-V Manager

The version does affect virtual machine mobility. I will come back to that topic toward the end of the article.

How to Determine a Virtual Machine’s Threads Per Core Count

Fortunately, the built-in Hyper-V PowerShell module provides direct access to the value via the *-VMProcessor cmdlet family. As a bonus, it simplifies the input and output to a single value. Instead of the above, you can simply enter:
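Here, demo-vm stands in for your virtual machine's name (the cmdlet exposes the value as HwThreadCountPerCore):

Get-VMProcessor -VMName 'demo-vm' | Select-Object -Property VMName, HwThreadCountPerCore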

If you want to see the value for all VMs:
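Something along these lines:

Get-VM | Get-VMProcessor | Select-Object -Property VMName, HwThreadCountPerCore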

You can leverage positional parameters and aliases to simplify these for on-the-fly queries:
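For example (select is the built-in alias for Select-Object, and demo-vm is again a placeholder):

Get-VMProcessor demo-vm | select VMName, HwThreadCountPerCore
Get-VM | Get-VMProcessor | select VMName, HwThreadCountPerCore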

You can also see the setting in recent versions of Hyper-V Manager (Windows Server 2019 and current versions of Windows 10). Look on the NUMA sub-tab of the Processor tab. Find the Hardware threads per core setting:

settings

In Windows Admin Center, access a virtual machine’s Processor tab in its settings. Look for Enable Simultaneous Multithreading (SMT).

processors

If the setting does not appear, then the host does not have SMT enabled.

How to Set a Virtual Machine’s Threads Per Core Count

You can easily change a virtual machine’s hardware thread count. For either the GUI or the PowerShell commands, remember that the virtual machine must be off and you must use one of the following values:

  • 0 = inherit, and only works on 2019+ and current versions of Windows 10 and Windows Server SAC
  • 1 = one thread per hardware core
  • 2 = two threads per hardware core
  • All values above 2 are invalid

To change the setting in the GUI or Windows Admin Center, access the relevant tab as shown in the previous section’s screenshots and modify the setting there. Remember that Windows Admin Center will hide the setting if the host does not have SMT enabled. Windows Admin Center does not allow you to specify a numerical value. If unchecked, it will use a value of 1. If checked, it will use a value of 2 for version 8.x VMs and 0 for version 9.x VMs.

To change the setting in PowerShell:
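With demo-vm once more standing in for your virtual machine's name:

# the VM must be off; use 1, 2, or (where supported) 0
Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2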

To change the setting for all VMs in PowerShell:
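For example, to give every virtual machine two threads per core:

# applies the same value to every VM on the host; each VM must be off for the change to take effect
Get-VM | Set-VMProcessor -HwThreadCountPerCore 2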

Note on the cmdlet’s behavior: If the target virtual machine is off, the setting will work silently with any valid value. If the target machine is on and the setting would have no effect, the cmdlet behaves as though it made the change. If the target machine is on and the setting would have made a change, PowerShell will error. You can include the -PassThru parameter to receive the modified vCPU object:
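For example (again with a placeholder VM name):

Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2 -Passthru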

Considerations for Hyper-V’s Core Scheduler

I recommend using the core scheduler in any situation that does not explicitly forbid it. I will not ask you to blindly take my advice, though. The core scheduler’s security implications matter, but you also need to think about scalability, performance, and compatibility.

Security Implications of the Core Scheduler

This one change instantly nullifies several exploits that could cross virtual machines, most notably in the Spectre category. Do not expect it to serve as a magic bullet, however. In particular, remember that an exploit running inside a virtual machine can still try to break other processes in the same virtual machine. By extension, the core scheduler cannot protect against threats running in the management operating system. It effectively guarantees that these exploits cannot cross partition boundaries.

For the highest level of virtual machine security, use the core scheduler in conjunction with other hardening techniques, particularly Shielded VMs.

Scalability Impact of the Core Scheduler

I have spoken with one person who was left with the impression that the core scheduler does not allow for oversubscription. They called into Microsoft support, and the engineer agreed with that assessment. I reviewed Microsoft’s public documentation as it was at the time, and I understand how they reached that conclusion. Rest assured that you can continue to oversubscribe CPU in Hyper-V. The core scheduler prevents threads owned by separate virtual machines from running simultaneously on the same core. When it starts a thread from a different virtual machine on a core, the scheduler performs a complete context switch.

You will have some reduced scalability due to the performance impact, however.

Performance Impact of the Core Scheduler

On paper, the core scheduler could have a severely deleterious effect on performance. It reduces the number of possible run locations for any given thread. Synthetic benchmarks also show a noticeable performance reduction when compared to the classic scheduler. A few points:

  • Generic synthetic CPU benchmarks drive hosts to abnormal levels using atypical loads. In simpler terms, they do not predict real-world outcomes.
  • Physical hosts with low CPU utilization will experience no detectable performance hits.
  • Running the core scheduler on a system with SMT enabled will provide better performance than the classic scheduler on the same system with SMT disabled

Your mileage will vary. No one can accurately predict how a general-purpose system will perform after switching to the core scheduler. Even a heavily-laden processor might not lose anything. Remember that, even in the best case, an SMT-enabled core will not provide more than about a 25% improvement over the same core with SMT disabled. In practice, expect no more than a 10% boost. In the simplest terms: switching from the classic scheduler to the core scheduler might reduce how often you enjoy a 10% boost from SMT’s second logical processor. I expect few systems to lose much by switching to the core scheduler.

Some software vendors provide tools that can simulate a real-world load. Where possible, leverage those. However, unless you dedicate an entire host to guests that only operate that software, you still do not have a clear predictor.

Compatibility Concerns with the Core Scheduler

As you saw throughout the implementation section, a virtual machine’s ability to fully utilize the core scheduler depends on its configuration version. That impacts Hyper-V Replica, Live Migration, Quick Migration, virtual machine import, backup, disaster recovery, and anything else that potentially involves hosts with mismatched versions.

Microsoft drew a line with virtual machine version 5.0, which debuted with Windows Server 2012 R2 (and Windows 8.1). Any newer Hyper-V host can operate virtual machines of its version all the way down to version 5.0. On any system, run  Get-VMHostSupportedVersion to see what it can handle. From a 2019 host:
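Running the cmdlet there shows, roughly:

Get-VMHostSupportedVersion
# on a 2019 host, the list runs from configuration version 5.0 up to 9.0, with 9.0 as the default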

So, you can freely move version 5.0 VMs between a 2012 R2 host and a 2016 host and a 2019 host. But, a VM must be at least version 8.0 to use the core scheduler at all. So, when a v5.0 VM lands on a host running the core scheduler, it cannot use SMT. I did not uncover any problems when testing an SMT-disabled guest on an SMT-enabled host or vice versa. I even set up two nodes in a cluster, one with Hyper-Threading on and the other with Hyper-Threading off, and moved SMT-enabled and SMT-disabled guests between them without trouble.

The final compatibility verdict: running old virtual machine versions on core-scheduled systems means that you lose a bit of density, but they will operate.

Summary of the Core Scheduler

This is a lot of information to digest, so let’s break it down to its simplest components. The core scheduler provides a strong inter-virtual machine barrier against cache side-channel attacks, such as the Spectre variants. Its implementation requires an overall reduction in the ability to use simultaneous multi-threaded (SMT) cores. Most systems will not suffer a meaningful performance penalty. Virtual machines have their own ability to enable or disable SMT when running on a core-scheduled system. All virtual machine versions prior to 8.0 (WS2016/W10 Anniversary) will only use one logical processor per core when running on a core-scheduled host.

Go to Original Article
Author: Eric Siron

Analyst, author talks enterprise AI expectations

For years, promoters have made AI technologies sound like the all-encompassing technology answer for enterprises, a ready-to-use piece of software that could solve all of an organization’s data and workflow problems with minimal effort.

While AI can certainly automate parts of a workflow and save employees and organizations time and money, it rarely requires no work or no integration to set up, something organizations are still struggling to understand.

In this Q&A ahead of the publication of his new book, Alan Pelz-Sharpe, founder of market advisory and research firm Deep Analysis, describes enterprises’ AI expectations and helps distinguish between realistic AI goals and expectations and the AI hype.

An AI project is different from a traditional IT project, Pelz-Sharpe said, and organizations should treat it as such.

“One of the reasons so many projects fail is people do not know that,” he said.

In this Q&A, Pelz-Sharpe, who is also a part-time voice and film actor, talks about AI expectations, deploying AI, and the realities of the technology.

Have you found that business users and enterprises have an accurate description of AI?

Alan Pelz-Sharpe: No. It’s a straightforward no. I’ll give you real world examples.

Alan Pelz-Sharpe

A very large, very well-known financial services company brought in the biggest vendor. They spent six and a half million dollars. Four months later, they fired the vendor, because they had nothing to show for it. They talked to me and it was heartbreaking, because I wanted to say to them, ‘Why? Why did you ever engage with them?’

It wasn’t because they were bad people engaged in this, but because they had very specific sets of circumstances and really, really specific requirements. I said to them, ‘You’re never going to buy this off the shelf, it doesn’t exist. You’re going to have to develop this yourself.’ Now, that’s what they’re doing, and they’re spending a lot less money and having a lot more success.

AI is being so overhyped; your heart goes out to buyers, because they don’t know who to believe. In some cases, they could save a fortune, go to some small startup [that] could, frankly, give them the product and get the job done. They don’t know that.

Are these cases of enterprises having the wrong AI expectations and not knowing what they want, or are they cases of a vendor misleading a buyer?

It’s absolutely both. Vendors have overhyped and oversold. Then the perception is I buy this tool, I plug it in, and it does its magic. It just isn’t like that ever. It’s never like that. That’s nonsense. So, the vendors are definitely guilty of that, but when haven’t they been?

From the buyer’s perspective, I think there are two things really. One, they don’t know. They don’t understand, they haven’t been told that they have to run this project very differently from an IT project. It’s not a traditional IT project in which you scope it out, decide what you’re going to use, test it and then it goes live. AI isn’t like that. It’s a lifetime investment. You’re never going to completely leave it alone, you’re always going to be involved with it.

Technically, there’s a perception that bigger and more powerful is better. Well, is it? If you’re trying to automatically classify statements versus purchases versus invoices, the usual back office paper stuff, why do you need the most powerful product? Why not, instead, just buy something simple, something that’s designed to do exactly that?

Often, buyers get themselves into deep waters. They buy a big Mack Truck to do what a small tricycle could do. That’s actually really common. Most real-world business use cases are surprisingly narrow and targeted.

Editor’s note: This interview has been edited for clarity and conciseness.

Go to Original Article

New machine learning model sifts through the good to unearth the bad in evasive malware – Microsoft Security

We continuously harden machine learning protections against evasion and adversarial attacks. One of the latest innovations in our protection technology is the addition of a class of hardened malware detection machine learning models called monotonic models to Microsoft Defender ATP‘s Antivirus.

Historically, detection evasion has followed a common pattern: attackers would build new versions of their malware and test them offline against antivirus solutions. They’d keep making adjustments until the malware can evade antivirus products. Attackers then carry out their campaign knowing that the malware won’t initially be blocked by AV solutions, which are then forced to catch up by adding detections for the malware. In the cybercriminal underground, antivirus evasion services are available to make this process easier for attackers.

Microsoft Defender ATP’s Antivirus has significantly advanced in becoming resistant to attacker tactics like this. A sizeable portion of the protection we deliver is powered by machine learning models hosted in the cloud. The cloud protection service breaks attackers’ ability to test and adapt to our defenses in an offline environment, because attackers must either forgo testing, or test against our defenses in the cloud, where we can observe them and react even before they begin.

Hardening our defenses against adversarial attacks doesn’t end there. In this blog we’ll discuss a new class of cloud-based ML models that further harden our protections against detection evasion.

Most machine learning models are trained on a mix of malicious and clean features. Attackers routinely try to throw these models off balance by stuffing clean features into malware.

Monotonic models are resistant against adversarial attacks because they are trained differently: they only look for malicious features. The magic is this: Attackers can’t evade a monotonic model by adding clean features. To evade a monotonic model, an attacker would have to remove malicious features.

Monotonic models explained

Last summer, researchers from UC Berkeley (Incer, Inigo, et al, “Adversarially robust malware detection using monotonic classification”, Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics, ACM, 2018) proposed applying a technique of adding monotonic constraints to malware detection machine learning models to make models robust against adversaries. Simply put, the said technique only allows the machine learning model to leverage malicious features when considering a file – it’s not allowed to use any clean features.

Figure 1. Features used by a baseline versus a monotonic constrained logistic regression classifier. The monotonic classifier does not use cleanly-weighted features so that it’s more robust to adversaries.

Inspired by the academic research, we deployed our first monotonic logistic regression models to Microsoft Defender ATP cloud protection service in late 2018. Since then, they’ve played an important part in protecting against attacks.

Figure 2 below illustrates the production performance of the monotonic classifiers versus the baseline unconstrained model. As expected, monotonic-constrained models detect less malware overall than classic models. However, they can detect malware attacks that otherwise would have been missed because of clean features.

Figure 2. Malware detection machine learning classifiers comparing the unconstrained baseline classifier versus the monotonic constrained classifier in customer protection.

The monotonic classifiers don’t replace baseline classifiers; they run in addition to the baseline and add additional protection. We combine all our classifiers using stacked classifier ensembles; monotonic classifiers add significant value because of the unique classification they provide.

How Microsoft Defender ATP uses monotonic models to stop adversarial attacks

One common way for attackers to add clean features to malware is to digitally code-sign malware with trusted certificates. Malware families like ShadowHammer, Kovter, and Balamid are known to abuse certificates to evade detection. In many of these cases, the attackers impersonate legitimate registered businesses to defraud certificate authorities into issuing them trusted code-signing certificates.

LockerGoga, a strain of ransomware that’s known for being used in targeted attacks, is another example of malware that uses digital certificates. LockerGoga emerged in early 2019 and has been used by attackers in high-profile campaigns that targeted organizations in the industrial sector. Once attackers are able to breach a target network, they use LockerGoga to encrypt enterprise data en masse and demand ransom.

Figure 3. LockerGoga variant digitally code-signed with a trusted CA

When Microsoft Defender ATP encounters a new threat like LockerGoga, the client sends a featurized description of the file to the cloud protection service for real-time classification. An array of machine learning classifiers processes the features describing the content, including whether attackers had digitally code-signed the malware with a trusted code-signing certificate that chains to a trusted CA. By ignoring certificates and other clean features, monotonic models in Microsoft Defender ATP can correctly identify attacks that otherwise would have slipped through defenses.

Very recently, researchers demonstrated an adversarial attack that appends a large volume of clean strings from a computer game executable to several well-known malware and credential dumping tools – essentially adding clean features to the malicious files – to evade detection. The researchers showed how this technique can successfully impact machine learning prediction scores so that the malware files are not classified as malware. The monotonic model hardening that we’ve deployed in Microsoft Defender ATP is key to preventing this type of attack, because, for a monotonic classifier, adding features to a file can only increase the malicious score.

Given how they significantly harden defenses, monotonic models are now standard components of machine learning protections in Microsoft Defender ATP‘s Antivirus. One of our monotonic models uniquely blocks malware on an average of 200,000 distinct devices every month. We now have three different monotonic classifiers deployed, protecting against different attack scenarios.

Monotonic models are just the latest enhancements to Microsoft Defender ATP’s Antivirus. We continue to evolve machine learning-based protections to be more resilient to adversarial attacks. More effective protections against malware and other threats on endpoints increases defense across the entire Microsoft Threat Protection. By unifying and enabling signal-sharing across Microsoft’s security services, Microsoft Threat Protection secures identities, endpoints, email and data, apps, and infrastructure.

Geoff McDonald (@glmcdona), Microsoft Defender ATP Research team
with Taylor Spangler, Windows Data Science team


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Follow us on Twitter @MsftSecIntel.

Go to Original Article
Author: Microsoft News Center

Kubernetes authentication project wrestles with migration problems

Kubernetes has not only matured, it’s showing a wrinkle or two.

In case there was any doubt that Kubernetes is no longer a fledgling project, it has begun to show its age with the accumulation of technical debt, as upstream developers struggle to smooth the migration process for a core set of security improvements.

Kubernetes authentication and role-based access control (RBAC) have improved significantly since the project reached version 1.0 four years ago. But one aspect of Kubernetes authentication management remains stuck in the pre-1.0 era: the management of access tokens that secure connections between Kubernetes pods and the Kubernetes API server.

Workload Identity for Google Kubernetes Engine (GKE), released last month, illuminated this issue. Workload Identity is now Google’s recommended approach to Kubernetes authentication between containerized GKE application services and GCP utilities because Workload Identity users no longer have to manage Kubernetes secrets or Google Cloud IAM service account keys themselves. Workload Identity also manages secrets rotation, and can set fine-grained secrets expiration and intended audience policies that ensure good Kubernetes security hygiene.

GKE users are champing at the bit to use Workload Identity because it could eliminate the need to manage this aspect of Kubernetes authentication with a third-party tool such as HashiCorp’s Vault, or worse, with error-prone manual techniques.

“Without this, you end up having to manage a bunch of secrets and access [controls] yourself, which creates lots of opportunities for human error, which has actually bitten us in the past,” said Josh Koenig, head of product at Pantheon Platform, a web operations platform in San Francisco that hosts more than 200,000 Drupal and WordPress sites.

Koenig recalled an incident where a misconfigured development cluster could have exposed an SSL certificate. Pantheon’s IT team caught the mistake, and tore down and reprovisioned the cluster. Pantheon also uses Vault in its production clusters to guard against such errors for more critical workloads but didn’t want that tool’s management overhead to slow down provisioning in the development environment.

“It’s a really easy mistake to make, even with the best of intentions, and the best answer to that is just not having to manage security as part of your codebase,” Koenig said. “There are ways to do that that are really expensive and heavy to implement, and then there’s Google doing it for you [with Workload Identity].”

Kubernetes authentication within clusters lags Workload Identity

Workload Identity uses updated Kubernetes authentication tokens, available in beta since Kubernetes 1.12, to more effectively and efficiently authenticate Kubernetes-based services as they interact with other Google Cloud Platform (GCP) services. Such communications are much more likely than communications within Kubernetes clusters to be targeted by attackers, and there are other ways to shore up security for intracluster communications, including using OpenID Connect tokens based on OAuth 2.0.

However, the SIG-Auth group within the Kubernetes upstream community is working to bring Kubernetes authentication tokens that pass between Kubernetes Pods up to speed with the Workload Identity standard, with automatic secrets rotation, expiration and limited-audience policies. This update would affect all Kubernetes environments, including those that run in GKE, and could allow cloud service providers to assume even more of the Kubernetes authentication management burden on behalf of users.

We think [API Server authentication tokens] improves the default security posture of Kubernetes open source.
Mike Danese, software engineer, Google; chair, Kubernetes SIG-Auth

“Our legacy tokens were only meant for use against the Kubernetes API Server, and using those tokens against something like Vault or Google poses some potential escalations or security risks,” said Mike Danese, a Google software engineer and the chair of Kubernetes SIG-Auth. “We’ll push on [API Server authentication tokens] a little bit because we think it improves the default security posture of Kubernetes open source.”

There’s a big problem, however. A utility to migrate older Kubernetes authentication tokens used with the Kubernetes API server to the new system, dubbed Bound Service Account Token Volumes and available in alpha since Kubernetes 1.13, has hit a show-stopping snag. Early users have encountered a compatibility issue when they try to migrate to the new tokens, which require a new kind of storage volume for authentication data that can be difficult to configure.

Without the new tokens, Kubernetes authentication between pods and the API Server could theoretically face the same risk of human error or mismanagement as GKE services before Workload Identity was introduced, IT experts said.

“The issues that are discussed as the abilities of Workload Identity — to reduce blast radius, require the refresh of tokens, enforce a short time to live and bind them to a single pod — would be the potential impact for current Kubernetes users” if the tokens aren’t upgraded eventually, said Chris Riley, DevOps delivery director at CPrime Inc., an Agile software development consulting firm in Foster City, Calif.

Kubernetes authentication upgrade beta delayed

After Kubernetes 1.13 was rolled out in December 2018, reports indicated that a beta release for Bound Service Account Token Volumes would arrive by Kubernetes 1.15, which was released in June 2019, or Kubernetes 1.16, due in September 2019.

However, the feature did not reach beta in 1.15, and is not expected to reach beta in 1.16, Danese said. It’s also still unknown how the eventual migration to new Kubernetes authentication tokens for the API Server will affect GKE users.

“We could soften our stance on some of the improvements to break legacy clients in fewer ways — for example, we’ve taken a hard stance on file permissions, which disrupted some legacy clients, binding specifically to the Kubernetes API server,” Danese said. “We have some subtle interaction with other security features like Pod Security Policy that we could potentially pave over by making changes or allowances in Pod Security Policies that would allow old Pod Security Policies to use the new tokens instead of the legacy tokens.”

Widespread community attention to this issue seems limited, Riley said, and it could be some time before it’s fully addressed.

However, he added, it’s an example of the reasons Kubernetes users among his consulting clients increasingly turn to cloud service providers to manage their complex container orchestration environment.

“The value of services like Workload Identity is the additional integration these providers are doing into other cloud services, simplifying the adoption and increasing overall reliability of Kubernetes,” Riley said. “I expect them to do the hard work and support migration or simple adoption of future technologies.”

Go to Original Article

Social determinants of health data provide better care

Social determinants of health data can help healthcare organizations deliver better patient care, but the challenge of knowing exactly how to use the data persists.

The healthcare community has long recognized the importance of a patient’s social and economic data, said Josh Schoeller, senior vice president and general manager of LexisNexis Health Care at LexisNexis Risk Solutions. The current shift to value-based care models, which are ruled by quality rather than quantity of care, has put a spotlight on this kind of data, according to Schoeller.

But social determinants of health also pose a challenge to healthcare organizations. Figuring out how to use the data in meaningful ways can be daunting, as healthcare organizations are already overwhelmed by loads of data.

A new framework, released last month by the not-for-profit eHealth Initiative Foundation, could help. The framework was developed by stakeholders, including LexisNexis Health Care, to give healthcare organizations guidance on how to use social determinants of health data ethically and securely.

Here’s a closer look at the framework.

Use cases for social determinants of health data

The push to include social determinants of health data into the care process is “imperative,” according to eHealth Initiative’s framework. Doing so can uncover potential risk factors, as well as gaps in care.

The eHealth Initiative’s framework outlines five guiding principles for using social determinants of health data. 

  1. Coordinating care

Determine if a patient has access to transportation or is food insecure, according to the document. The data can also help a healthcare organization coordinate with community health workers and other organizations to craft individualized care plans.

  2. Using analytics to uncover health and wellness risks

Use social determinants of health data to predict a patient’s future health outcomes. Analyzing social and economic data can help the provider know if an individual is at an increased risk of having a negative health outcome, such as hospital re-admittance. The risk score can be used to coordinate a plan of action.

  3. Mapping community resources and identifying gaps

Use social determinants of health data to determine what local community resources exist to serve the patient populations, as well as what resources are lacking.

  4. Assessing service and impact

Monitor care plans or other actions taken using social determinants of health data and how it correlates to health outcomes. Tracking results can help an organization adjust interventions, if necessary.

  5. Customizing health services and interventions

Inform patients about how social determinants of health data are being used. Healthcare organizations can educate patients on available resources and agree on next steps to take.

Getting started: A how-to for healthcare organizations

The eHealth Initiative is not alone in its attempt to move the social determinants of health data needle.

Niki Buchanan, general manager of population health at Philips Healthcare, has some advice of her own.

  1. Lean on the community health assessment

Buchanan said most healthcare organizations conduct a community health assessment internally, which provides data such as demographics and transportation needs, and identifies at-risk patients. Having that data available and knowing whether patients are willing or able to take advantage of community resources outside of the doctor’s office is critical, she said.

Look for things that meet not only your own internal ROI in caring for your patients, but that also add value and patient engagement opportunities to those you’re trying to serve in a more proactive way.
Niki Buchanan, general manager of population health management, Philips Healthcare

  2. Connect the community resource dots

Buchanan said a healthcare organization should be aware of what community resources are available to them, whether it’s a community driving service or a local church outreach program. The organization should also assess at what level it is willing to partner with outside resources to care for patients.

“Are you willing to partner with the Ubers of the world, the Lyfts of the world, to pick up patients proactively and make sure they make it to their appointment on time and get them home,” she said. “Are you able to work within the local chamber of commerce to make sure that any time there’s a food market or a fresh produce kind of event within the community, can you make sure the patients you serve have access?”

  3. Start simple

Buchanan said healthcare organizations should approach social determinants of health data with the patient in mind. She recommended healthcare organizations start small with focused groups of patients, such as diabetics or those with other chronic conditions, but that they also ensure the investment is a worthwhile one.

“Look for things that meet not only your own internal ROI in caring for your patients, but that also add value and patient engagement opportunities to those you’re trying to serve in a more proactive way,” she said.

Go to Original Article

Increases expected from new Microsoft Dynamics 365 pricing

Upcoming changes to Microsoft Dynamics 365 pricing will lead to lower licensing fees for some users while possibly raising the cost of the cloud-based business applications platform for organizations with 100-plus seats.

Microsoft described the changes, scheduled to take effect Oct. 1, in a blog post this week. The company gave partners advance notice of the new pricing scheme at last week’s Inspire conference in Las Vegas.

Midsize and larger organizations that use more than one application could face significant increases, because of Microsoft’s decision to unbundle Dynamics 365 apps and sell them a la carte. Currently, the cloud apps come priced as bundles, with many companies on a Customer Engagement Plan at $115 per user/month. The plan includes five core applications — Sales, Customer Service, Field Service, Project Service Automation and Marketing.

The new individual pricing would cost $95 per user/month for one app, with $20 “attach licenses” for additional apps. Alysa Taylor, corporate vice president for business applications at Microsoft, said in the blog post that customers preferred having the option of adding or removing applications as their companies grew and changed over time.

 Microsoft Dynamics 365 pricing to take effect October 2019
Microsoft Dynamics 365 CRM ‘attach pricing’

But users of Dynamics 365 CRM software would likely pay more for that convenience, Dolores Ianni, an analyst at Gartner, said. Those customers typically employ multiple applications in the Customer Engagement Plan.

I feel that a majority of renewing customers are going to be substantially impacted by this change.
Dolores Ianni, analyst, Gartner

“If you’re using four applications and you had a thousand users, well, your price went up 158% — it varies wildly,” Ianni said. “I feel that a majority of renewing customers are going to be substantially impacted by this change.”

Microsoft claims 80% of its customers are using only one application, but anecdotal evidence indicates otherwise. Conversations with Microsoft customers — and a review of their contracts — show that the largest enterprises with 1,000 or more users could pay substantially more in some cases, Ianni said. Organizations with 100 or more users could also pay more, even if they’re using only one application. Companies with more than one application would get hit harder.

Microsoft could offer its largest customers promotional deals that would mitigate the price hike, Ianni said. The company often provides such breaks when changing pricing.

Readiness tips for Dynamics 365 pricing changes

Organizations should prepare for the new pricing structures by analyzing which employees use which applications today. Businesses can sometimes find ways to cut costs after getting a complete understanding of how workers are using the software.

“They’re going to have to do their homework to a much greater degree than they did in the past,” Ianni said.

Go to Original Article