Tag Archives: years

Microsoft releases 18M building footprints in Africa to enable AI Assisted Mapping

In the last ten years, 2 billion people were affected by disasters, according to the World Disasters Report 2018. In 2017, 201 million people needed humanitarian assistance and 18 million were displaced due to weather-related disasters. Many of these disaster-prone areas are literally “missing” from the map, making it harder for first responders to prepare and deliver relief efforts.

Since the inception of Tasking Manager, the Humanitarian OpenStreetMap Team (HOT) community has mapped at an incredible rate, covering 11 million square kilometers in Africa alone. However, large parts of Africa with populations prone to disasters still remain unmapped — 60% of its 30 million square kilometers.

Under Microsoft’s AI for Humanitarian Action program, Bing Maps together with Microsoft Philanthropies is partnering with HOT on an initiative to bring AI Assistance as a resource in open map building. The initiative focuses on incorporating design updates, integrating machine learning, and bringing new open building datasets into Tasking Manager.

The Bing Maps team has been harnessing the power of computer vision to identify map features at scale. Building upon their work in the United States and Canada, Bing Maps is now releasing country-wide open building footprint datasets for Uganda and Tanzania. These will be among the first open building datasets in Africa and will be available for use within OpenStreetMap (OSM).

In Tasking Manager specifically, the dataset will be used to help in task creation with the goal of improving task completion rates. Tasking Manager relies on ML Enabler to connect with building datasets through an API. This API-based integration makes it convenient to access not just the Africa building footprints, but all open building footprint datasets from Bing Maps through ML Enabler, and thus the OpenStreetMap ecosystem.

“Machine learning datasets for OSM need to be open. We need to go beyond identifying roads and buildings and open datasets allow us to experiment and uncover new opportunities. Open Building Dataset gives us the ability to not only explore quality and validation aspects, but also advance how ML data assists mapping.”
– Tyler Radford (Executive Director, Humanitarian OpenStreetMap Team)

Africa presented several challenges: stark differences in landscape from the United States and Canada, unique settlements such as tukuls, dense urban areas with connected structures, imagery quality and vintage, and a lack of training data in rural areas. The team identified areas with poor recall by leveraging population estimates from CIESIN. Subsequent targeted labeling efforts across Bing Maps and HOT improved model recall, especially in rural areas. A two-step process of semantic segmentation followed by polygonization resulted in 18M building footprints — 7M in Uganda and 11M in Tanzania.

Extractions in Musoma, Tanzania

Bing Maps is making this data open for download, free of charge, and usable for research, analysis and, of course, OSM. OpenStreetMap currently contains roughly 14M building footprints in Uganda and Tanzania (as of our team’s last count). We are working to determine the overlap between the two datasets.

We will be making the data available for download on GitHub. The CNTK toolkit developed by Microsoft is open source and available on GitHub, as is the ResNet34 model. The Bing Maps computer vision team will be presenting the work in Africa at the annual International State of the Map conference in Heidelberg, Germany and at the HOT Summit.

– Bing Maps Team

Go to Original Article
Author: Microsoft News Center

DerbyCon attendees and co-founder reflect on the end

After nine years running, DerbyCon held its ninth and final show, and attendees and a co-founder looked back on the conference and discussed plans to continue the community with smaller groups around the world.

DerbyCon was one of the more popular small-scale hacker conferences held in the U.S., but organizers surprised the infosec community in January by announcing DerbyCon 9 would be the last one. The news came after multiple attendee allegations of mistreatment by the volunteer security staff and inaction regarding the safety of attendees.

Dave Kennedy, co-founder of DerbyCon, founder of TrustedSec LLC and co-founder of Binary Defense Systems, did not comment on specific allegations at the time and said the reason for the conference coming to an end was that the conference had gotten too big and there was a growing “toxic environment” created by a small group of people “creating negativity, polarization and disruption.”

Kennedy claimed in a recent interview that DerbyCon “never really had any major security incidents where we weren’t able to handle the situation quickly and de-escalate at the conference with our security staff.”

Roxy Dee, a vulnerability management specialist who has been outspoken about the safety of women at DerbyCon, told SearchSecurity that “it’s highly irresponsible to paint it as a great conference” given the past allegations and what she described as a lack of response from conference organizers.

Despite these past controversies, attendees praised DerbyCon 9, held in Louisville, Ky., from Sept. 6 to 8 this year. There have been no major complaints, and Kennedy told SearchSecurity it was everything the team wanted for the last year and that it “went better than any other year I can remember.”

“When we started this conference we had no idea what we were doing or how to run a conference. We went from that to one of the most impactful family conferences in the world,” Kennedy said. “It’s been a lot of work, a lot of time and effort, but at the end of the day we accomplished everything we wanted to get out of the conference and then some. Family, community and friendship. It was an incredible experience and one that I’ll miss for sure.”

As a joke, someone handed Kennedy a paper during the conference reading “DerbyCon 10,” and the image quickly circulated around the conference via Twitter. Kennedy admitted he and all of the organizers “struggled with ending DerbyCon this year or not, but we were all really burned out.”

“When we decided, it was from all of us that it was the right direction and the right time to go on a high note. We didn’t have any doubts at all this year that there would ever be another DerbyCon. This is it for us and we ended on a high note that was both memorable and magical to us,” Kennedy said. “The attendees, staff, speakers and everyone were just absolutely incredible. Thank you to all who made DerbyCon possible and for growing an amazing community.”

The legacy of DerbyCon

Kennedy told SearchSecurity that his inspiration for fostering the DerbyCon community initially was David Logan’s Tribal Leadership, “which talks about growing a tribe based on a specific culture.”

“A culture for a conference can be developed if we try hard enough and I think our success was we really focused on that family and community culture with DerbyCon,” Kennedy said. “A conference is a direct representation of the people that put it on, and we luckily were able to establish a culture early on that was sorely needed in the INFOSEC space.”

April C. Wright, security consultant at ArchitectSecurity.org, said in her years attending, DerbyCon provided a “wonderful environment with tons of positivity and personality.”

“I met my best friend there. I can’t describe how much good there was going on, from raising money for charity to knowledge sharing to welcoming first-time attendees,” Wright said. “The quality of content and villages were world class. The volunteers and staff have always been friendly and kind. It was in my top list of cons worldwide.”

Eric Beck, a pen-tester and web app security specialist, said the special part about DerbyCon was a genuine effort to run contrary to the traditional infosec community view that “you can pwn or you can’t.”

“We all start somewhere, we all have different strengths and weaknesses and everyone has a seat at the table. Dave [Kennedy] set a welcoming tone and it meant that people that might otherwise hesitate took that first step. And that first step is always the hardest,” Beck said. “DerbyCon was my infosec home base and where I recharged my batteries and I don’t know who or what can fill its shoes. I have a kiddo I thought I’d share this conference with and met people I assumed I’d see annually. I’m personally determined to contribute more in infosec and make the effort to reach out, but I have a difficult time imagining being part of something that brought in the caliber of talent and the sense of welcoming that this conference did.”

Danny Akacki, senior technical account manager with Gigamon Insight, said his first time attending was DerbyCon 6, and the moment he walked into the venue he “fell in love with the vibe of that place and those people.”

“I still didn’t know too many people but I swear to god it didn’t matter. I made so many friends that weekend and I had the hardest bout of post-con blues I’ve ever experienced, which is a testament to just how profound an effect that year had on me,” Akacki said. “I had to skip 7, but made it to 8 and 9. Every year I went back, it felt like only a day had passed since the last visit because that experience and those people stay with you every day.” 

For Alethe Denis, founder of Dragonfly Security, DerbyCon 9 was her first time attending and she said the experience was everything she expected and more.

“The atmosphere was like a sleepover, compared to the giant summer camp that is DEF CON, and I really enjoyed that aspect of it. It felt like it was a weekend getaway with friends and the lack of casinos was appreciated. But I don’t feel that the quality of the talks and availability of villages was sacrificed in the least,” Denis said. “Even as small as Derby is, it was really tough to do everything I wanted to do because there were so many interesting options available. I feel like it brought only the best elements of the DEF CON type community and DEF CON conference to the Midwest.”

Micah Brown, security engineer at American Modern Insurance Group and vice president of the Greater Cincinnati ISSA chapter, echoed the sentiments of brother/sisterhood at DerbyCon and the cheerfulness of the conference and added another key tenet: Charity.

“One of the key tenets of DerbyCon has always been giving back. During the closing ceremonies, it was revealed that over the past 9 years, DerbyCon and the attendees have given over $700,000 to charity. That does not count the hours of people’s lives that go into making the presentations, the tools, the training that are freely distributed each year. Nor does it factor in the personal relationships and mentorships that are established and progress our community,” Brown said. “It was after my first DerbyCon I volunteered to be the Director of Education for the Greater Cincinnati ISSA Chapter and after my second DerbyCon I volunteered to be the Vice President of the Chapter. DerbyCon has also inspired me to give back by sharing my knowledge through giving my own presentations, including the honor to give back to the DerbyCon community with my own talk this year.”

Beyond DerbyCon

Xena Olsen, cyberthreat intelligence analyst in the financial services industry, attended the last two years of DerbyCon and credited the “community and sense of belonging” there with encouraging her to continue learning and leading her to now being a cybersecurity PhD student at Marymount University.

“The DerbyCon Communities initiative will hopefully serve as a means for people to experience the DerbyCon culture around the world,” Olsen said. “As far as a conference taking the place of DerbyCon, I’m not sure that’s possible. But other conferences can adopt similar values of community and inclusiveness, knowledge sharing and charity.” 

Wright said she has seen other conferences with similar personality and passion, “but none have really captured the heart of DerbyCon.”

“There are a lot of great regional cons in the U.S. that I think more people will start going to. They are affordable and easily accessed, with the small-con feel — as opposed to the mega-con vibe of ‘Hacker Summer camp’,” Wright said, referencing the week in Las Vegas that includes Black Hat, DEF CON, BSides Las Vegas, Diana Con and QueerCon plus other events, meetups and parties. “I don’t think anyone can fill the space left by DerbyCon, but I do think each will continue with its own set of ways and personality.”

Akacki was adamant that “no other con will ever take Derby’s place.”

“It burned fast and it burned bright. It was lightning in a bottle, never to be seen again. However, I’m not sad,” Akacki said. “I can’t even say that its vibe is rising from the ashes, because it would have to have burned down for that to happen. The fire that is the spirit of DerbyCon still burns and, I’d argue, it burns brighter than ever.”

Denis said it will be difficult for any conference to truly replace DerbyCon.

“I feel like the people who organized and were passionate about DerbyCon are what made Derby unique. I’m not sure any other con will be able to truly capture that magic and fill the space left by Derby,” Denis said. “But I guess that remains to be seen, and I hope that more cons, such as Blue Team Con in June 2020 in Chicago, bring high-quality content and engaging talks to the Midwest in the future.”

Wright noted that some of her favorite smaller security conferences included GRRcon, NOLAcon, CircleCityCon, CypherCon, Showmecon, Toorcon and [Wild West Hackin’ Fest], and she expressed hope that the proposed “DerbyCon Communities” project “will help with the void left by the end of the era of the original DerbyCon.”

The DerbyCon Communities initiative

The organizers saw DerbyCon growing fast, but “didn’t want to turn the conference into such a large production like DEF CON,” Kennedy told SearchSecurity.

“We wanted to go back to why DerbyCon was so successful and that was due to three core principles: Positivity and Inclusiveness, Knowledge Sharing and Charity. There is a direct need for a community to help new people in the industry and help charity at the same time,” Kennedy said. “The goal for the Communities initiative is to bring people together the same way DerbyCon did for one common goal.”

Kennedy also confirmed that there will be some involvement with the Communities initiative from the “core group” of organizers, including his wife Erin, Martin Bos and others.

Akacki said that with the local Derby Communities initiative, “the spirit of Derby has exploded into stardust, covering our universe.”

“You can’t kill what we’ve built, you can’t contain it and you can’t stop it,” Akacki said. “I’m not crying because it ended, I’m smiling and laughing … because it just became bigger than ever.”

On Sept. 11, Kennedy pitched the full idea of DerbyCon Communities to the team and said there should be four main areas of focus:

  • Chapter Groups
    • Independently run with chapter heads
    • Geographically placed
    • Volunteer network
  • Established Groups
    • Partner with similar groups that meet the criteria and pass an approval process to join the DerbyCon network.
  • Conferences
    • Established or new. Allow for new conferences to be created.
  • Kids
    • Programs geared towards teaching next-gen children.

Ultimately, Kennedy told SearchSecurity he wants new groups to “be welcoming and accepting of new people and making a difference and impact in their local communities or worldwide.”

“Our hope is that not only do DerbyCon Chapters spawn up, but other conferences and chapter groups will join forces to create a DerbyCon network of sorts to grow this community in a positive way.”

Go to Original Article
Author:

For Sale – iMac 27 Retina 5k – Boxed & High Spec – Late 2015 – 2TB – R9 M395

iMac 27 Retina 5k Late 2015

Bought in Nov 2016, only 2.5 years old and hardly used.

– Radeon R9 M395 graphics with 2GB GDDR5 memory
– 2TB Fusion Drive
– i5 3.3GHz quad core (Turbo Boost up to 3.9GHz)
– 8GB RAM (user upgradable)
– 5K Retina display

Comes in original box with all accessories and packaging.

High spec model, perfect for video editing, gaming, home office, etc.

– like new condition, hardly used in home office. 1 owner from new
– no scratches or damage. Never modified, overclocked or upgraded
– screen is in perfect condition, no edge bleed, no dead pixels
– RAM can be user upgraded; going to 16GB can be done for about £45
– comes in original box and all accessories such as mouse, cable and keyboard
– updated to the latest macOS; will be wiped and Find My Mac disabled
– selling as I have purchased a MacBook

Any questions let me know, no silly offers please. These typically sell around the £1000 mark, and this is a high end model with R9 M395 graphics. Can deliver within reason or meet in central or east London.

Price and currency: 900
Delivery: Goods must be exchanged in person
Payment method: Cash/transfer
Location: London
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article
Author:

Discover how Microsoft 365 can help health providers adapt in an era of patient data protection and sharing

For years, patient data management meant one thing—secure the data. Now, healthcare leaders must protect and openly share the data with patients and with other healthcare organizations to support quality of care, patient safety, and cost reduction. As data flows more freely, following the patient, there’s less risk of redundant testing that increases cost and waste. Legacy infrastructure and cybersecurity concerns stand on the critical path to greater interoperability and patient record portability. Learn how Microsoft 365 can help.

Impact of regulatory changes and market forces

Regulatory changes are a big driver for this shift. Through regulations like the 21st Century Cures Act in the United States, healthcare organizations are required to improve their capabilities to protect and share patient data. The General Data Protection Regulation (GDPR) in the European Union expands the rights of data subjects over their data. Failing to share patient data in an effective, timely, and secure manner can result in significant penalties for providers and for healthcare payors.

Market forces are another driver of this shift as consumers’ expectations of omni-channel service and access spill over to healthcare. This augurs well for making the patient more central to data flows.

There are unintended consequences, however. The increasing need to openly share data creates new opportunities for hackers to explore, and new risks for health organizations to manage.

It’s more important than ever to have a data governance and proactive cybersecurity strategy that enables free data flow with an optimal security posture. In fact, government regulators will penalize healthcare organizations for non-compliance—and so will the marketplace.

How Microsoft 365 can prepare your organization for the journey ahead

Modernizing legacy systems and processes is a daunting, expensive task. Navigating a digitized but siloed information system is costly, impedes clinician workflow, and complicates patient safety goals.

To this end, Microsoft Teams enables the integration of electronic health record information and other health data, allowing care teams to communicate and collaborate about patient care in real time. Leading interoperability partners continue to build the ability to integrate electronic health records into Teams through a FHIR interface. With Teams, clinical workers can securely access patient information, chat with other team members, and even have modern meeting experiences, all without having to switch between apps.

Incomplete data and documentation are among the biggest sources of provider and patient dissatisfaction. Clinicians value the ability to communicate with each other securely and swiftly to deliver the best informed care at point of care.

Teams now offers new secure messaging capabilities, including priority notifications and message delegation, as well as a smart camera with image annotation and secure sharing, so images stay in Teams and aren’t stored to the clinician’s device image gallery.

Image of phone screens showing priority notifications and message delegation.

What about cybersecurity and patient data? As legacy infrastructure gives way to more seamless data flow, it’s important to protect against a favorite tactic of cyber criminals—phishing.

Phishing emails—weaponized emails that appear to come from a reputable source or person—are increasingly difficult to detect. As regulatory pressure mounts within healthcare organizations to not “block” access to data, the risk of falling for such phishing attacks is expected to increase. To help mitigate this trend, Office 365 Advanced Threat Protection (ATP) provides a cloud-based email filtering service with sophisticated anti-phishing capabilities.

For example, Office 365 ATP provides real-time detonation capabilities to find and block unknown threats, including malicious links and attachments. Links in email are continuously evaluated for user safety. Similarly, any attachments in email are tested for malware and unsafe attachments are removed.

Image of a message appearing on a tablet screen showing a website that has been classified as malicious.

For data to flow freely, it’s important to apply the right governance and protection to sensitive data. And that is premised on appropriate data classification. Microsoft 365 helps organizations find and classify sensitive data across a variety of locations, including devices, apps, and cloud services with Microsoft Information Protection. Administrators need to know that sensitive data is accessed by authorized personnel only. Microsoft 365, through Azure Active Directory (Azure AD), enables capabilities like Multi-Factor Authentication (MFA) and conditional access policies to minimize the risk of unauthorized access to sensitive patient information.

For example, if a user or device sign-in is tagged as high-risk, Azure AD can automatically enforce conditional access policies that can limit or block access or require the user to re-authenticate via MFA. Benefitting from the integrated signals of the Microsoft Intelligent Security Graph, Microsoft 365 solutions look holistically at the user sign-in behavior over time to assess risk and investigate anomalies where needed.

When faced with the prospect of internal leaks, Supervision in Microsoft 365 can help organizations monitor employees’ communications channels to manage compliance and reduce reputational risk from policy violations. As patient data is shared, tracking its flow is essential. Audit log and alerts in Microsoft 365 includes several auditing and reporting features that customers can use to track certain activity such as changes made to documents and other items.

Finally, as you conform with data governance regulatory obligations and audits, Microsoft 365 can assist you in responding to regulators. Advanced eDiscovery and Data Subject Requests (DSRs) capabilities offer the agility and efficiency you need when going through an audit, helping you find relevant patient data or respond to patient information requests.

Using the retention policies of Advanced Data Governance, you can retain core business records in unalterable, compliant formats. With records management capabilities, your core business records can be properly declared and stored with full audit visibility to meet regulatory obligations.

Learn more

Healthcare leaders must adapt quickly to market and regulatory expectations regarding data flows. Clinical and operations leaders depend on data flowing freely to make data-driven business and clinical decisions, to understand patterns in patient care and to constantly improve patient safety, quality of care, and cost management.

Microsoft 365 helps improve workflows through the integration power of Teams, moving the right data to the right place at the right time. Microsoft 365 also helps your security and compliance posture through advanced capabilities that help you manage and protect identity, data, and devices.

Microsoft 365 is the right cloud platform for you in this new era of patient data protection—and data sharing. Check out the Microsoft 365 for health page to learn more about how Microsoft 365 and Teams can empower your healthcare professionals in a modern workplace.

Go to Original Article
Author: Microsoft News Center

What is the Hyper-V Core Scheduler?

In the past few years, sophisticated attackers have targeted vulnerabilities in CPU acceleration techniques. Cache side-channel attacks represent a significant danger, and the risk is magnified on a host running multiple virtual machines: one compromised virtual machine can potentially retrieve information held in cache for a thread owned by another virtual machine. To address such concerns, Microsoft developed its new “HyperClear” technology pack. HyperClear implements multiple mitigation strategies. Most of them work behind the scenes and require no administrative effort or education. However, HyperClear also includes the new “core scheduler,” which might require you to take action.

The Classic Scheduler

Now that Hyper-V has all new schedulers, its original has earned the “classic” label. I wrote an article on that scheduler some time ago. The advanced schedulers do not replace the classic scheduler so much as they hone it. So, you need to understand the classic scheduler in order to understand the core scheduler. A brief recap of the earlier article:

  • You assign a specific number of virtual CPUs to a virtual machine. That sets the upper limit on how many threads the virtual machine can actively run.
  • When a virtual machine assigns a thread to a virtual CPU, Hyper-V finds the next available logical processor to operate it.

To keep it simple, imagine that Hyper-V assigns threads in round-robin fashion. Hyper-V does engage additional heuristics, such as trying to keep a thread with its owned memory in the same NUMA node. It also knows about simultaneous multi-threading (SMT) technologies, including Intel’s Hyper-Threading and AMD’s recent advances. That means that the classic scheduler will try to place threads where they can get the most processing power. Frequently, a thread shares a physical core with a completely unrelated thread — perhaps from a different virtual machine.

Risks with the Classic Scheduler

The classic scheduler poses a cross-virtual machine data security risk. It stems from the architectural nature of SMT: a single physical core can run two threads but has only one cache.

In my research, I discovered several attacks in which one thread reads cached information belonging to the other. I did not find any examples of one thread polluting the other’s data, but I also did not see anything explicitly preventing that sort of assault.

On a physically installed operating system, you can mitigate these risks with relative ease by leveraging antimalware and following standard defensive practices. Software developers can make use of fencing techniques to protect their threads’ cached data. Virtual environments make things harder because the guest operating systems and binary instructions have no influence on where the hypervisor places threads.

The Core Scheduler

The core scheduler makes one fairly simple change to close the vulnerability of the classic scheduler: it never assigns threads from more than one virtual machine to any physical core. If it can’t assign a second thread from the same VM to the second logical processor, then the scheduler leaves it empty. Even better, it allows the virtual machine to decide which threads can run together.

We will walk through implementing the scheduler before discussing its impact.

Implementing Hyper-V’s Core Scheduler

The core scheduler has two configuration points:

  1. Configure Hyper-V to use the core scheduler
  2. Configure virtual machines to use two threads per virtual core

Many administrators miss that second step. Without it, a VM will always use only one logical processor on its assigned cores. Each virtual machine has its own independent setting.

We will start by changing the scheduler. You can change the scheduler at a command prompt (cmd or PowerShell) or by using Windows Admin Center.

How to Use the Command Prompt to Enable and Verify the Hyper-V Core Scheduler

For Windows and Hyper-V Server 2019, you do not need to do anything at the hypervisor level. You still need to set the virtual machines. For Windows and Hyper-V Server 2016, you must manually switch the scheduler type.

You can make the change at an elevated command prompt (PowerShell prompt is fine):
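A minimal sketch of the command:

    # switch the hypervisor to the core scheduler ("classic" and "root" are the other accepted values)
    bcdedit /set hypervisorschedulertype core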

Note: if bcdedit does not accept the setting, ensure that you have patched the operating system.

Reboot the host to enact the change. If you want to revert to the classic scheduler, use “classic” instead of “core”. You can also select the “root” scheduler, which is intended for use with Windows 10 and will not be discussed further here.

To verify the scheduler, just run bcdedit by itself and look at the last line:
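A sketch of the check, with the output trimmed to the relevant line:

    bcdedit
    # ...the Windows Boot Loader entry should end with a line similar to:
    # hypervisorschedulertype    Core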

bcdedit will show the scheduler type by name. It will always appear, even if you disable SMT in the host’s BIOS/UEFI configuration.

How to Use Windows Admin Center to Enable the Hyper-V Core Scheduler

Alternatively, you can use Windows Admin Center to change the scheduler.

  1. Use Windows Admin Center to open the target Hyper-V host.
  2. At the lower left, click Settings. In most browsers, it will hide behind any URL tooltip you might have visible. Move your mouse to the lower left corner and it should reveal itself.
  3. Under Hyper-V Host Settings sub-menu, click General.
  4. Underneath the path options, you will see Hypervisor Scheduler Type. Choose your desired option. If you make a change, WAC will prompt you to reboot the host.

Note: If you do not see an option to change the scheduler, check that:

  • You have a current version of Windows Admin Center
  • The host has SMT enabled
  • The host runs at least Windows Server 2016

The scheduler type can change even if SMT is disabled on the host. However, you will need to use bcdedit to see it (see previous sub-section).

Implementing SMT on Hyper-V Virtual Machines

With the core scheduler enabled, virtual machines can no longer depend on Hyper-V to make the choice to use a core’s second logical processor. Hyper-V will expect virtual machines to decide when to use the SMT capabilities of a core. So, you must enable or disable SMT capabilities on each virtual machine just like you would for a physical host.

Because of the way this technology developed, the defaults and possible settings may seem unintuitive. New in 2019, newly-created virtual machines can automatically detect the SMT status of the host and hypervisor and use that topology. Basically, they act like a physical host that ships with Hyper-Threaded CPUs — they automatically use it. Virtual machines from previous versions need a bit more help.

Every virtual machine has a setting named “HwThreadsPerCore”. The property belongs to the Msvm_ProcessorSettingData CIM class, which connects to the virtual machine via its Msvm_Processor associated instance. You can drill down through the CIM API using the following PowerShell (don’t forget to change the virtual machine name):
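A sketch of that drill-down; 'demo-vm' is a placeholder name, and the query follows the Msvm_Processor association described above:

    # 'demo-vm' is a placeholder; substitute your virtual machine's name
    Get-CimInstance -Namespace 'root/virtualization/v2' -ClassName 'Msvm_ComputerSystem' -Filter 'ElementName="demo-vm"' |
        Get-CimAssociatedInstance -ResultClassName 'Msvm_Processor' |
        Get-CimAssociatedInstance -ResultClassName 'Msvm_ProcessorSettingData' |
        Select-Object -Property ElementName, HwThreadsPerCore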

The output of the cmdlet will present one line per virtual CPU. If you’re worried that you can only access them via this verbose technique, hang in there! I only wanted to show you where this information lives on the system. You have several easier ways to get to and modify the data. I want to finish the explanation first.

The HwThreadsPerCore setting can have three values:

  • 0 means inherit from the host and scheduler topology — limited applicability
  • 1 means 1 thread per core
  • 2 means 2 threads per core

The setting has no other valid values.

A setting of 0 makes everything nice and convenient, but it only works in very specific circumstances. Use the following to determine defaults and setting eligibility:

  • VM config version < 8.0
    • Setting is not present
    • Defaults to 1 if upgraded to VM version 8.x
    • Defaults to 0 if upgraded to VM version 9.0+
  • VM config version 8.x
    • Defaults to 1
    • Cannot use a 0 setting (cannot inherit)
    • Retains its setting if upgraded to VM version 9.0+
  • VM config version 9.x
    • Defaults to 0

I will go over the implications after we talk about checking and changing the setting.

You can see a VM’s configuration version in Hyper-V Manager and in PowerShell’s Get-VM:
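A quick sketch of the PowerShell check:

    # the Version property holds the VM configuration version
    Get-VM | Select-Object -Property Name, Version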

The version does affect virtual machine mobility. I will come back to that topic toward the end of the article.

How to Determine a Virtual Machine’s Threads Per Core Count

Fortunately, the built-in Hyper-V PowerShell module provides direct access to the value via the *-VMProcessor cmdlet family. As a bonus, it simplifies the input and output to a single value. Instead of the above, you can simply enter:
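For example, a sketch using a placeholder VM name:

    # 'demo-vm' is a placeholder; the simplified property is HwThreadCountPerCore
    (Get-VMProcessor -VMName 'demo-vm').HwThreadCountPerCore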

If you want to see the value for all VMs:
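A sketch:

    Get-VM | Get-VMProcessor | Select-Object -Property VMName, HwThreadCountPerCore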

You can leverage positional parameters and aliases to simplify these for on-the-fly queries:
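A sketch of the shortened forms, again with a placeholder VM name:

    # VMName binds positionally and 'select' is the built-in alias for Select-Object
    (Get-VMProcessor demo-vm).HwThreadCountPerCore
    Get-VM | Get-VMProcessor | select VMName, HwThreadCountPerCore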

You can also see the setting in recent versions of Hyper-V Manager (Windows Server 2019 and current versions of Windows 10). Look on the NUMA sub-tab of the Processor tab and find the Hardware threads per core setting.

In Windows Admin Center, access a virtual machine’s Processor tab in its settings. Look for Enable Simultaneous Multithreading (SMT).

If the setting does not appear, then the host does not have SMT enabled.

How to Set a Virtual Machine’s Threads Per Core Count

You can easily change a virtual machine’s hardware thread count. For either the GUI or the PowerShell commands, remember that the virtual machine must be off and you must use one of the following values:

  • 0 = inherit, and only works on 2019+ and current versions of Windows 10 and Windows Server SAC
  • 1 = one thread per hardware core
  • 2 = two threads per hardware core
  • All values above 2 are invalid

To change the setting in the GUI or Windows Admin Center, access the relevant tab as shown in the previous section’s screenshots and modify the setting there. Remember that Windows Admin Center will hide the setting if the host does not have SMT enabled. Windows Admin Center does not allow you to specify a numerical value. If unchecked, it will use a value of 1. If checked, it will use a value of 2 for version 8.x VMs and 0 for version 9.x VMs.

To change the setting in PowerShell:
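A sketch, with 'demo-vm' as a placeholder name:

    # the virtual machine must be off; valid values are 0, 1 and 2
    Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2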

To change the setting for all VMs in PowerShell:
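A sketch:

    # pipe every VM on the host into the same setting
    Get-VM | Set-VMProcessor -HwThreadCountPerCore 2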

Note on the cmdlet’s behavior: If the target virtual machine is off, the setting will work silently with any valid value. If the target machine is on and the setting would have no effect, the cmdlet behaves as though it made the change. If the target machine is on and the setting would have made a change, PowerShell will error. You can include the -PassThru parameter to receive the modified vCPU object:
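A sketch, again with the placeholder VM name:

    # returns the modified virtual processor object instead of completing silently
    Set-VMProcessor -VMName 'demo-vm' -HwThreadCountPerCore 2 -Passthru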

Considerations for Hyper-V’s Core Scheduler

I recommend using the core scheduler in any situation that does not explicitly forbid it. I will not ask you to blindly take my advice, though. The core scheduler’s security implications matter, but you also need to think about scalability, performance, and compatibility.

Security Implications of the Core Scheduler

This one change instantly nullifies several exploits that could cross virtual machines, most notably in the Spectre category. Do not expect it to serve as a magic bullet, however. In particular, remember that an exploit running inside a virtual machine can still try to break other processes in the same virtual machine. By extension, the core scheduler cannot protect against threats running in the management operating system. It effectively guarantees that these exploits cannot cross partition boundaries.

For the highest level of virtual machine security, use the core scheduler in conjunction with other hardening techniques, particularly Shielded VMs.

Scalability Impact of the Core Scheduler

I have spoken with one person who was left with the impression that the core scheduler does not allow for oversubscription. They called into Microsoft support, and the engineer agreed with that assessment. I reviewed Microsoft’s public documentation as it was at the time, and I understand how they reached that conclusion. Rest assured that you can continue to oversubscribe CPU in Hyper-V. The core scheduler prevents threads owned by separate virtual machines from running simultaneously on the same core. When it starts a thread from a different virtual machine on a core, the scheduler performs a complete context switch.

You will have some reduced scalability due to the performance impact, however.

Performance Impact of the Core Scheduler

On paper, the core scheduler presents severe deleterious effects on performance. It reduces the number of possible run locations for any given thread. Synthetic benchmarks also show a noticeable performance reduction when compared to the classic scheduler. A few points:

  • Generic synthetic CPU benchmarks drive hosts to abnormal levels using atypical loads. In simpler terms, they do not predict real-world outcomes.
  • Physical hosts with low CPU utilization will experience no detectable performance hits.
  • Running the core scheduler on a system with SMT enabled will provide better performance than the classic scheduler on the same system with SMT disabled

Your mileage will vary. No one can accurately predict how a general-purpose system will perform after switching to the core scheduler. Even a heavily-laden processor might not lose anything. Remember that, even in the best case, an SMT-enabled core will not provide more than about a 25% improvement over the same core with SMT disabled. In practice, expect no more than a 10% boost. In the simplest terms: switching from the classic scheduler to the core scheduler might reduce how often you enjoy a 10% boost from SMT’s second logical processor. I expect few systems to lose much by switching to the core scheduler.

Some software vendors provide tools that can simulate a real-world load. Where possible, leverage those. However, unless you dedicate an entire host to guests that only operate that software, you still do not have a clear predictor.

Compatibility Concerns with the Core Scheduler

As you saw throughout the implementation section, a virtual machine’s ability to fully utilize the core scheduler depends on its configuration version. That impacts Hyper-V Replica, Live Migration, Quick Migration, virtual machine import, backup, disaster recovery, and anything else that potentially involves hosts with mismatched versions.

Microsoft drew a line with virtual machine version 5.0, which debuted with Windows Server 2012 R2 (and Windows 8.1). Any newer Hyper-V host can operate virtual machines of its version all the way down to version 5.0. On any system, run Get-VMHostSupportedVersion to see what it can handle. On a 2019 host, the supported list runs from version 5.0 up through 9.0, with 9.0 as the default for new virtual machines.
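A sketch of the call:

    # lists every virtual machine configuration version this host can run
    Get-VMHostSupportedVersion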

So, you can freely move version 5.0 VMs between a 2012 R2 host and a 2016 host and a 2019 host. But, a VM must be at least version 8.0 to use the core scheduler at all. So, when a v5.0 VM lands on a host running the core scheduler, it cannot use SMT. I did not uncover any problems when testing an SMT-disabled guest on an SMT-enabled host or vice versa. I even set up two nodes in a cluster, one with Hyper-Threading on and the other with Hyper-Threading off, and moved SMT-enabled and SMT-disabled guests between them without trouble.

The final compatibility verdict: running old virtual machine versions on core-scheduled systems means that you lose a bit of density, but they will operate.

Summary of the Core Scheduler

This is a lot of information to digest, so let’s break it down to its simplest components. The core scheduler provides a strong inter-virtual machine barrier against cache side-channel attacks, such as the Spectre variants. Its implementation requires an overall reduction in the ability to use simultaneous multi-threaded (SMT) cores. Most systems will not suffer a meaningful performance penalty. Virtual machines have their own ability to enable or disable SMT when running on a core-scheduled system. All virtual machine versions prior to 8.0 (WS2016/W10 Anniversary) will only use one logical processor per core when running on a core-scheduled host.

Go to Original Article
Author: Eric Siron

Analyst, author talks enterprise AI expectations

For years, promoters have made AI technologies sound like the all-encompassing technology answer for enterprises, a ready-to-use piece of software that could solve all of an organization’s data and workflow problems with minimal effort.

While AI can certainly automate parts of a workflow and save employees and organizations time and money, it can rarely be set up with no work and no integration, something organizations are still struggling to understand.

In this Q&A ahead of the publication of his new book, Alan Pelz-Sharpe, founder of market advisory and research firm Deep Analysis, describes enterprises’ AI expectations and helps distinguish between realistic AI goals and expectations and the AI hype.

An AI project is different from a traditional IT project, Pelz-Sharpe said, and organizations should treat it as such.

“One of the reasons so many projects fail is people do not know that,” he said.

In this Q&A, Pelz-Sharpe, who is also a part-time voice and film actor, talks about AI expectations, deploying AI, and the realities of the technology.

Have you found that business users and enterprises have an accurate description of AI?

Alan Pelz-Sharpe: No. It’s a straightforward no. I’ll give you real world examples.

Alan Pelz-Sharpe

A very large, very well-known financial services company brought in the biggest vendor. They spent six and a half million dollars. Four months later, they fired the vendor, because they had nothing to show for it. They talked to me and it was heartbreaking, because I wanted to say to them, ‘Why? Why did you ever engage with them?’

It wasn’t because they were bad people engaged in this, but because they had very specific sets of circumstances and really, really specific requirements. I said to them, ‘You’re never going to buy this off the shelf, it doesn’t exist. You’re going to have to develop this yourself.’ Now, that’s what they’re doing, and they’re spending a lot less money and having a lot more success.

AI is being so overhyped; your heart goes out to buyers, because they don’t know who to believe. In some cases, they could save a fortune, go to some small startup [that] could, frankly, give them the product and get the job done. They don’t know that.

Are these cases of enterprises having the wrong AI expectations and not knowing what they want, or are they cases of a vendor misleading a buyer?

It’s absolutely both. Vendors have overhyped and oversold. Then the perception is I buy this tool, I plug it in, and it does its magic. It just isn’t like that ever. It’s never like that. That’s nonsense. So, the vendors are definitely guilty of that, but when haven’t they been?

From the buyer’s perspective, I think there are two things really. One, they don’t know. They don’t understand, they haven’t been told that they have to run this project very differently from an IT project. It’s not a traditional IT project in which you scope it out, decide what you’re going to use, test it and then it goes live. AI isn’t like that. It’s a lifetime investment. You’re never going to completely leave it alone, you’re always going to be involved with it.

Technically, there’s a perception that bigger and more powerful is better. Well, is it? If you’re trying to automatically classify statements versus purchases versus invoices, the usual back office paper stuff, why do you need the most powerful product? Why not, instead, just buy something simple, something that’s designed to do exactly that?

Often, buyers get themselves into deep waters. They buy a big Mack Truck to do what a small tricycle could do. That’s actually really common. Most real-world business use cases are surprisingly narrow and targeted.

Editor’s note: This interview has been edited for clarity and conciseness.

Go to Original Article
Author:

The stories behind Microsoft’s affordable housing initiative | Microsoft On The Issues

The Puget Sound region has been home to Microsoft for more than 30 years. As the company has grown, the area has changed. New industries have brought more jobs, fresh opportunities and greater prosperity. 

But new housing has not kept up with job growth, and the Greater Seattle area has become the sixth most expensive place to live in the United States.  

That means many of the workers who make a community function – such as nurses, police officers, teachers and firefighters – can no longer afford to live in the cities or suburbs where they work. 

Chart showing Job growth compared to housing growth

The problem is particularly acute in the suburban cities around Seattle. Low- and middle-income workers often face long commutes.

Microsoft is committed to helping kick-start solutions to this crisis and is investing $500 million to advance affordable housing solutions. Microsoft is also advocating for changes in public policy at the city and state levels to address the long-term factors affecting housing affordability.

This commitment is about more than housing. It is about the people who make our communities places we all want to live in.

For more on Microsoft’s initiatives in the Puget Sound region follow @MSFTIssues on Twitter.  

Go to Original Article
Author: Microsoft News Center

Public cloud storage use still in early days for enterprises

Enterprise use of public cloud storage has been growing at a steady pace for years, yet plenty of IT shops remain in the early stages of the journey.

IT managers are well aware of the potential benefits — including total cost of ownership, agility and unlimited capacity on demand — and many face cloud directives. But companies with significant investments in on-premises infrastructure are still exploring the applications where public cloud storage makes the most sense beyond backup, archive and disaster recovery (DR).

Ken Lamb, who oversees resiliency for cloud at JP Morgan, sees the cloud as a good fit, especially when the financial services company needs to get an application to market quickly. Lamb said JP Morgan uses public cloud storage from multiple providers for development and testing, production applications and DR and runs the workloads internally in “production parallel mode.”

JP Morgan’s cloud data footprint is small compared to its overall storage capacity, but Lamb said the company has a large migration plan for Amazon Web Services (AWS).  

“The biggest problem is the way applications interact,” Lamb said. “When you put something in the cloud, you have to think: Is it going to reach back to anything that you have internally? Does it have high communication with other applications? Is it tightly coupled? Is it latency sensitive? Do you have compliance requirements? Those kind of things are key decision areas to say this makes sense or it doesn’t.”

Public cloud storage trends

Enterprise Strategy Group research shows an increase in the number of organizations running production applications in the public cloud, whereas most used it only for backups or archives a few years ago, according to ESG senior analyst Scott Sinclair.  Sinclair said he’s also seeing more companies identify themselves as “cloud-first” in terms of their overall IT strategy, although many are “still beginning their journeys.”

“When you’re an established company that’s been around for decades, you have a data center. You’ve probably got a multi-petabyte environment. Even if you didn’t have to worry about the pain of moving data, you probably wouldn’t ship petabytes to the cloud overnight,” Sinclair said. “They’re reticent unless there is some compelling need. Analytics would be one.”

Chart: organizations’ reasons for using cloud infrastructure services, from Enterprise Strategy Group market research.

The Hartford has a small percentage of its data in the public cloud. But the Connecticut-based insurance and financial services company plans to use Amazon’s Simple Storage Service (S3) for hundreds of terabytes, if not petabytes, of data from its Hadoop analytics environment, said Stephen Whitlock, who works in cloud operations for compute and storage at The Hartford.

One challenge The Hartford faces in shifting from on-premises Hortonworks Hadoop to Amazon Elastic MapReduce (EMR) is mapping permissions to its large data set, Whitlock said. The company migrated compute instances to the cloud, but the Hadoop Distributed File System (HDFS)-based data remains on premises while the team sorts out the migration to the EMR File System (EMRFS), Amazon’s implementation of HDFS, Whitlock said.

Finishing the Hadoop project is the first priority before The Hartford looks to public cloud storage for other use cases, including “spiky” and “edge” workloads, Whitlock said. He knows costs for network connectivity, bandwidth and data transfers can add up, so the team plans to focus on applications where the cloud can provide the greatest advantage. The Hartford’s on-premises private cloud generally works well for small applications, and the public cloud makes sense for data-driven workloads, such as the analytics engines that “we can’t keep up with,” Whitlock said.

“It was never a use case to say we’re going to take everything and dump it into the cloud,” Whitlock said. “We did the metrics. It just was not cheaper. It’s like a convenience store. You go there when you’re out of something and you don’t want to drive 10 miles to the Costco.”

Moving cloud data back

Capital District Physicians’ Health Plan (CDPHP), a not-for-profit organization based in Albany, NY, learned from experience that the cloud may not be the optimal place for every application. CDPHP launched its cloud initiative in 2014, using AWS for disaster recovery, and soon adopted a cloud-first strategy. However, Howard Fingeroth, director of infrastructure architecture and data engineering at CDPHP, said the organization plans to bring two or three administration and financial applications back to its on-premises data center for cost reasons.

“We did a lot of lift and shift initially, and that didn’t prove to be a real wise choice in some cases,” Fingeroth said. “We’ve now modified our cloud strategy to be what we’re calling ‘smart cloud,’ which is really doing heavy-duty analysis around when it makes sense to move things to the cloud.”

Fingeroth said the cloud helps with what he calls the “ilities”: agility, affordability, flexibility and recoverability. CDPHP primarily uses Amazon’s Elastic Block Storage for production applications that run in the cloud and also has less expensive S3 object storage for backup and DR in conjunction with commercial backup products, he said.  

“As time goes on, people get more sophisticated about the use of the cloud,” said John Webster, a senior partner and analyst at Evaluator Group. “They start with disaster recovery or some easy use case, and once they understand how it works, they start progressing forward.”

Evaluator Group’s most recent hybrid cloud storage survey, conducted in 2018, showed that disaster recovery was the primary use case, followed by data sharing/content repository, test and development, archival storage and data protection, according to Webster. He said about a quarter used the public cloud for analytics and tier 1 applications.

Public cloud expansion

The vice president of strategic technology for a New York-based content creation company said he is considering expanding his use of the public cloud as an alternative to storing data in SAN or NAS systems in photo studios in the U.S. and Canada. The VP, who asked that neither he nor his company be named, said his company generates up to a terabyte of data a day. It uses storage from Winchester Systems for primary data and has about 30 TB of “final files” on AWS. He said he is looking into storage gateway options from vendors such as Nasuni and Morro Data to move data more efficiently into a public cloud.

“It’s just a constant headache from an IT perspective,” he said of on-premises storage. “There’s replication. There’s redundancy. There is a lot of cost involved. You need IT people in each location. There is no centralized control over that data. Considering all the labor, ongoing support contacts and the ability to scale without doing capex [with on-premises storage], it’s more cost effective and efficient to be in the cloud.”

Go to Original Article
Author:

For Sale – MSI GTX 1060 GAMING X 6GB

Upgraded to a whole new build, so I’m selling my 1060.
Almost 3 years old, warranty ends in August, can provide proof of purchase (Currys).

Never overclocked, non-smoking household, still works amazingly at 1080p and even tested at 1440p, some titles play incredibly well at various settings (Rocket League, Dota2, Destiny2, GTA V).

Postage not included. I have the original box.

Will update with pictures soon.

Price and currency: £120
Delivery: Delivery cost is not included
Payment method: PPG
Location: Manchester
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article
Author: