Tag Archives: Researchers

ZombieLoad v2 disclosed, affects newest Intel chips

Security researchers disclosed a new version of the ZombieLoad attack and warned that Intel’s fixes for the original threat can be bypassed.

The original ZombieLoad attack — a speculative execution exploit that could allow attackers to steal sensitive data from Intel processors — was first announced May 14 as part of a set of microarchitectural data sampling (MDS) attacks that also included RIDL (Rogue In-Flight Data Load) and Fallout. According to the researchers, they first disclosed ZombieLoad v2 to Intel on April 23 with an update on May 10 to communicate that “the attacks work on Cascade Lake CPUs,” Intel’s newest line of processors. However, ZombieLoad v2 was kept under embargo until this week.

“We present a new variant of ZombieLoad that enables the attack on CPUs that include hardware mitigations against MDS in silicon. With Variant 2 (TAA), data can still be leaked on microarchitectures like Cascade Lake where other MDS attacks like RIDL or Fallout are not possible,” the researchers wrote on the ZombieLoad website. “Furthermore, we show that the software-based mitigations in combination with microcode updates presented as countermeasures against MDS attacks are not sufficient.”

One of the ZombieLoad researchers, Moritz Lipp, PhD candidate in information security at the Graz University of Technology in Austria, told SearchSecurity the problem with the patch for the initial MDS issues is that it “does not prevent the attack, just makes it harder. It just takes longer as the leakage rate is not that high.”

Lipp added that the team’s relationship with Intel has been improving over the past two years and the extended embargo was a direct result of ZombieLoad v2 affecting Cascade Lake processors.

In an update to the original ZombieLoad research paper, the researchers noted that the main advantage of variant two “is that it also works on machines with hardware fixes for Meltdown,” and noted that the attack requires “the Intel TSX instruction-set extension which is only available on selected CPUs since 2013,” including various Skylake, Kaby Lake, Coffee Lake, Broadwell and Cascade Lake processors.
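
On Linux systems, one quick way to see whether a machine even exposes the TSX feature the attack depends on, and how the kernel reports its MDS and TAA mitigation status, is to read the CPU flags and the sysfs vulnerability entries. A minimal sketch, assuming a Linux kernel recent enough to publish a tsx_async_abort entry (older kernels simply will not have the file):

```python
# Minimal sketch: report whether this CPU exposes TSX (RTM) and how the Linux
# kernel rates its MDS/TAA mitigation status. Assumes a Linux system; the
# sysfs "vulnerabilities" entries only exist on kernels that ship the
# corresponding mitigations.
from pathlib import Path

def cpu_flags():
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def vuln_status(name):
    path = Path("/sys/devices/system/cpu/vulnerabilities") / name
    return path.read_text().strip() if path.exists() else "not reported by this kernel"

flags = cpu_flags()
print("TSX (RTM) present :", "rtm" in flags)   # ZombieLoad v2 requires TSX
print("mds               :", vuln_status("mds"))
print("tsx_async_abort   :", vuln_status("tsx_async_abort"))
```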

Intel did not respond to questions regarding ZombieLoad v2 — which the company refers to as TSX Asynchronous Abort (TAA) — or the original MDS patch, and instead directed SearchSecurity to the company’s November 2019 Intel Platform Update blog post. In that blog post, Jerry Bryant, director of communications for Intel Product Assurance and Security, admitted Intel’s MDS mitigations fell short.

“We believe that the mitigations for TAA and MDS substantively reduce the potential attack surface,” Bryant wrote. “Shortly before this disclosure, however, we confirmed the possibility that some amount of data could still be inferred through a side-channel using these techniques (for TAA, only if TSX is enabled) and will be addressed in future microcode updates.”

In an attached “deep dive,” Intel also admitted the ZombieLoad v2 attack “may expose data from either the current logical processor or from the sibling logical processor on processors with simultaneous multithreading.”

The researchers also noted that with the range of CPUs affected, the attack could be performed both on PCs as well as in the cloud.

“The attack can be mounted in virtualized environments like the cloud as well [as] across hyperthreads, if two virtual machines are each running on one of them,” Lipp told SearchSecurity. “However, typically huge cloud providers don’t schedule virtual machines [on sibling hyperthreads] anymore.”

Chris Goettl, director of product management, security at Ivanti, told SearchSecurity that while the research is interesting, the risks of ZombieLoad are relatively low.

“In a cloud environment a vulnerability like this could allow an attacker to glean information across many companies, true, but we are talking about a needle in a field of haystacks,” Goettl said. “Threat actors have motives and they will drive toward their objectives in most cases as quickly and easily as they possibly can. There are a number of information disclosure vulnerabilities that are going to be far easier to exploit than ZombieLoad.”

Lipp confirmed that to leak sensitive data, an attacker would need to ensure “a victim loads specific data, for instance triggering code that loads passwords in order to authenticate a user, an attacker can leak that.”

Ultimately, Goettl said he would expect Intel to continue to be reactive with side-channel attacks like ZombieLoad until there is “a precipitating event where any of these exploits are used in a real-world attack scenario.”

“The incomplete MDS patch probably says a little about how much effort Intel is putting into resolving the vulnerabilities. They fixed exactly what they were shown was the issue, but didn’t look beyond to see if something more should be done or if that fix could also be circumvented,” Goettl said. “As long as speculative execution remains academic, Intel’s approach will likely continue to be reactive rather than proactive.”

Microsoft + The Jackson Laboratory: Using AI to fight cancer

Biomedical researchers are embracing artificial intelligence to accelerate the implementation of cancer treatments that target patients’ specific genomic profiles, a type of precision medicine that in some cases is more effective than traditional chemotherapy and has fewer side effects.

The potential for this new era of cancer treatment stems from advances in genome sequencing technology that enables researchers to more efficiently discover the specific genomic mutations that drive cancer, and an explosion of research on the development of new drugs that target those mutations.

To harness this potential, researchers at The Jackson Laboratory, an independent, nonprofit biomedical research institution also known as JAX and headquartered in Bar Harbor, Maine, developed a tool to help the global medical and scientific communities stay on top of the continuously growing volume of data generated by advances in genomic research.

The tool, called the Clinical Knowledgebase, or CKB, is a searchable database where subject matter experts store, sort and interpret complex genomic data to improve patient outcomes and share information about clinical trials and treatment options.

The challenge is to find the most relevant cancer-related information from the 4,000 or so biomedical research papers published each day, according to Susan Mockus, the associate director of clinical genomic market development with JAX’s genomic medicine institute in Farmington, Connecticut.

“Because there is so much data and so many complexities, without embracing and incorporating artificial intelligence and machine learning to help in the interpretation of the data, progress will be slow,” she said.

That’s why Mockus and her colleagues at JAX are collaborating with computer scientists working on Microsoft’s Project Hanover who are developing AI technology that enables machines to read complex medical and research documents and highlight the important information they contain.

While this machine reading technology is in the early stages of development, researchers have found they can make progress by narrowing the focus to specific areas such as clinical oncology, explained Peter Lee, corporate vice president of Microsoft Healthcare in Redmond, Washington.

“For something that really matters like cancer treatment where there are thousands of new research papers being published every day, we actually have a shot at having the machine read them all and help a board of cancer specialists answer questions about the latest research,” he said.

Peter Lee, corporate vice president of Microsoft Healthcare. Photo by Dan DeLong.

Curating CKB

Mockus and her colleagues are using Microsoft’s machine reading technology to curate CKB, which stores structured information about genomic mutations that drive cancer, drugs that target cancer genes and the response of patients to those drugs.

One application of this knowledgebase allows oncologists to discover what, if any, matches exist between a patient’s known cancer-related genomic mutations and drugs that target them as they explore and weigh options for treatment, including enrollment in clinical trials for drugs in development.

This information is also useful to translational and clinical researchers, Mockus noted.

The bottleneck is filtering through the more than 4,000 papers published every day in biomedical journals to find the subset of about 200 related to cancer, read them and update CKB with the relevant information on the mutation, drug and patient response.

“What you want is some degree of intelligence incorporated into the system that can go out and not just be efficient, but also be effective and relevant in terms of how it can filter information. That is what Hanover has done,” said Auro Nair, executive vice president of JAX.

The core of Microsoft’s Project Hanover is the capability to comb through the thousands of documents published each day in the biomedical literature and flag and rank all that are potentially relevant to cancer researchers, highlighting, for example, information on gene, mutation, drug and patient response.

Human curators working on CKB are then free to focus on the flagged research papers, validating the accuracy of the highlighted information.
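
As a toy illustration of that flag-and-rank step (this is not Project Hanover’s actual model; the term list and abstracts are invented), a triage script could score each abstract by how many curated cancer-related terms it mentions and surface the highest-scoring papers first:

```python
# Toy illustration only (not Project Hanover's actual model): score abstracts
# by how many curated cancer-related terms they mention, so curators can start
# with the highest-ranked papers. The term list and abstracts are invented.
CANCER_TERMS = {"mutation", "oncogene", "egfr", "braf", "erlotinib", "resistance"}

def rank_abstracts(abstracts):
    scored = []
    for doc_id, text in abstracts.items():
        words = set(text.lower().replace(".", " ").replace(",", " ").split())
        scored.append((len(words & CANCER_TERMS), doc_id))
    return sorted(scored, reverse=True)  # highest relevance score first

abstracts = {
    "paper-1": "EGFR mutation status predicted response to erlotinib.",
    "paper-2": "We surveyed hospital staffing levels in rural clinics.",
}
for score, doc_id in rank_abstracts(abstracts):
    print(doc_id, "relevance score:", score)
```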

“Our goal is to make the human curators superpowered,” said Hoifung Poon, director of precision health natural language processing with Microsoft’s research organization in Redmond and the lead researcher on Project Hanover.

“With the machine reader, we are able to suggest that this might be a case where a paper is talking about a drug-gene mutation relation that you care about,” Poon explained. “The curator can look at this in context and, in a couple of minutes, say, ‘This is exactly what I want,’ or ‘This is incorrect.’”

Hoifung Poon, director of precision health natural language processing with Microsoft’s research organization, is leading the development of Project Hanover, a machine reading technology. Photo by Jonathan Banks.

Self supervision

To be successful, Poon and his team need to train machine learning models in such a way that they catch all the potentially relevant information – ensure there are no gaps in content – and, at the same time, weed out irrelevant information sufficiently to make the curation process more efficient.

In traditional machine reading tasks such as finding information about celebrities in news stories, researchers tend to focus on relationships contained within a single sentence, such as a celebrity name and a new movie.

Since this type of information is widespread across news stories, researchers can skip instances that are more challenging such as when the name of the celebrity and movie are mentioned in separate paragraphs, or when the relationship involves more than two pieces of information.

“In biomedicine, you can’t do that because your latest finding may only appear in this single paper and if you skip it, it could be life or death for this patient,” explained Poon. “In this case, you have to tackle some of the hard linguistic challenges head on.”

Poon and his team are taking what they call a self-supervision approach to machine learning in which the model automatically annotates training examples from unlabeled text by leveraging prior knowledge in existing databases and ontologies.

For example, a National Cancer Institute initiative manually compiled information from the biomedical literature on how genes regulate each other but was unable to sustain the effort beyond two years. Poon’s team used the compiled knowledge to automatically label documents and train a machine reader to find new instances of gene regulation.
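
A minimal sketch of that self-supervision idea, sometimes called distant supervision, might look like the following. The knowledge base and sentences are made up for illustration and stand in for the NCI-compiled data:

```python
# Minimal sketch of self-supervision (distant supervision): use an existing
# knowledge base of known gene-regulation pairs to auto-label sentences from
# unlabeled text as candidate training examples for a relation extractor.
# The gene pairs and sentences below are invented for illustration.
KNOWN_REGULATIONS = {("TP53", "MDM2"), ("KRAS", "RAF1")}

def auto_label(sentences):
    examples = []
    for sent in sentences:
        tokens = set(sent.replace(",", " ").replace(".", " ").split())
        for gene_a, gene_b in KNOWN_REGULATIONS:
            if gene_a in tokens and gene_b in tokens:
                examples.append((sent, gene_a, gene_b, 1))  # noisy positive label
    return examples

corpus = [
    "TP53 activity is tightly controlled by MDM2 in many tumor types.",
    "The trial enrolled 120 patients across three sites.",
]
for example in auto_label(corpus):
    print(example)
```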

They took the same approach with public datasets on approved cancer drugs and drugs in clinical trials, among other sources.

This connect-the-dots approach creates a machine learned model that “rarely misses anything” and is precise enough “where we can potentially improve the curation efficiency by a lot,” said Poon.

Collaboration with JAX

The collaboration with JAX allows Poon and his team to validate the effectiveness of Microsoft’s machine reading technology while increasing the efficiency of Mockus and her team as they curate CKB.

“Leveraging the machine reader, we can say here is what we are interested in and it will help to triage and actually rank papers for us that have high clinical significance,” Mockus said. “And then a human goes in to really tease apart the data.”

Over time, feedback from the curators will be used to help train the machine reading technology, making the models more precise and, in turn, making the curators more efficient and allowing the scope of CKB to expand.

“We feel really, really good about this relationship,” said Nair. “Particularly from the standpoint of the impact it can have in providing a very powerful tool to clinicians.”

John Roach writes about Microsoft research and innovation. Follow him on Twitter.

USBAnywhere vulnerabilities put Supermicro servers at risk

Security researchers discovered a set of vulnerabilities in Supermicro servers that could allow threat actors to remotely attack systems as if they had physical access to the USB ports.

Researchers at Eclypsium, based in Beaverton, Ore., discovered flaws in the baseboard management controllers (BMCs) of Supermicro servers and dubbed the set of issues “USBAnywhere.” The researchers said authentication issues put servers at risk because “BMCs are intended to allow administrators to perform out-of-band management of a server, and as a result are highly privileged components.

“The problem stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, an ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass,” the researchers wrote in a blog post. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.”

The USBAnywhere flaws make it so the virtual USB drive acts in the same way a physical USB drive would, meaning an attacker could load a new operating system image, deploy malware or disable the target device. However, the researchers noted the attacks would only be possible on systems where the BMCs are directly exposed to the internet or if an attacker already has access to a corporate network.

Rick Altherr, principal engineer at Eclypsium, told SearchSecurity, “BMCs are one of the most privileged components on modern servers. Compromise of a BMC practically guarantees compromise of the host system as well.”

Eclypsium said there are currently “at least 47,000 systems with their BMCs exposed to the internet and using the relevant protocol.” These systems would be at additional risk because BMCs are rarely powered off and the authentication bypass vulnerability can persist unless the system is turned off or loses power.

Altherr said he found the USBAnywhere vulnerabilities because he “was curious how virtual media was implemented across various BMC implementations,” but Eclypsium found that only Supermicro systems were affected.

According to the blog post, Eclypsium reported the USBAnywhere flaws to Supermicro on June 19 and provided additional information on July 9, but Supermicro did not acknowledge the reports until July 29.

“Supermicro engaged with Eclypsium to understand the vulnerabilities and develop fixes. Supermicro was responsive throughout and worked to coordinate availability of firmware updates to coincide with public disclosure,” Altherr said. “While there is always room for improvement, Supermicro responded in a way that produced an amicable outcome for all involved.”

Altherr added that customers should “treat BMCs as a vulnerable device. Put them on an isolated network and restrict access to only IT staff that need to interact with them.”

Supermicro noted in its security advisory that isolating BMCs from the internet would reduce the risk of USBAnywhere but not eliminate the threat entirely. Firmware updates are currently available for affected Supermicro systems, and in addition to updating, Supermicro advised users to disable virtual media by blocking TCP port 623.
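
As a rough way to verify that the advice has taken effect from a given vantage point, an administrator could probe whether a BMC still answers on TCP port 623. A minimal sketch (the address below is a placeholder, not a real BMC):

```python
# Rough check, not a security scanner: see whether a BMC still answers on
# TCP port 623, which Supermicro advised blocking to disable remote virtual
# media. BMC_HOST is a placeholder (TEST-NET address), not a real BMC.
import socket

BMC_HOST = "192.0.2.10"
PORT = 623

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(f"TCP {PORT} on {BMC_HOST} reachable:", port_open(BMC_HOST, PORT))
```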

Microsoft Investigator Fellowship seeks PhD faculty submissions

August 1, 2019 | By Jamie Harper, Vice-President, US Education

Microsoft is expanding its support for academic researchers through the new Microsoft Investigator Fellowship. This fellowship is designed to empower researchers of all disciplines who plan to make an impact with research and teaching using the Microsoft Azure cloud computing platform.

From predicting traffic jams to advancing the Internet of Things, Azure has continued to evolve with the times, and this fellowship aims to keep Azure at the forefront of new ideas in the cloud computing space. Microsoft fellowships have likewise evolved, with a long history of supporting researchers and promoting diversity and promising academic research in the field of computing. This fellowship adds to that legacy and highlights the significance of Azure in education, both now and into the future.

Full-time faculty at degree-granting colleges or universities in the United States who hold PhDs are eligible to apply. The fellowship supports faculty who are currently conducting research, advising graduate students and teaching in a classroom, and who currently use or plan to use Microsoft Azure in research, teaching, or both.

Fellows will receive $100,000 annually for two years to support their research. Fellows will also be invited to attend multiple events during this time, where they will make connections with other faculty from leading universities and Microsoft. They will have the opportunity to participate in the greater academic community as well. Members of the cohort will also be offered various training and certification opportunities.

When reviewing the submissions, Microsoft will evaluate the proposed future research and teaching impact of Azure. This will include consideration of how the Microsoft Azure cloud computing platform will be leveraged in size, scope, or unique ways for research, teaching, or both.

Candidates should submit their proposals directly on the fellowship website by August 16, 2019. Recipients will be announced in September 2019.

We encourage you to submit your proposal! For more information on the Microsoft Investigator Fellowship, please check out the fellowship website.

Zoom security issues leave vendor scrambling

Zoom was caught flatfooted this week by the reaction to a security researcher’s report on the vulnerabilities of a web server it had quietly installed on Apple computers. The debacle raised broader questions on whether unified communications vendors were too quick to sacrifice privacy and security for ease of use.

The Zoom security issue stemmed from the use of the web server as a workaround for a privacy feature on version 12 of the Safari web browser, which Apple released for the Mac last fall. The feature forced users to consent to open Zoom’s video app every time they tried to join a meeting. In contrast, browsers like Chrome and Firefox let users check a box telling them to automatically trust Zoom’s app in the future.

Zoom felt the extra click in Safari would undermine its frictionless experience for joining meetings, so it installed the web server on Mac computers to launch a meeting immediately.

That left Mac users vulnerable to being instantly joined to a Zoom meeting by clicking on a spam link or loading a malicious website or pop-up advertisement. A similar risk still exists for all Mac and PC users who choose to have their web browsers automatically launch Zoom.

Another issue with the Mac web server was that it would remain in place even after users deleted the Zoom app, and would automatically reinstall Zoom upon receiving a request to join a meeting, according to the security researcher. It also created an avenue for denial-of-service attacks, a risk that Zoom released an optional patch for in May.

In a broader sense, the permanent installation of a web server on local devices troubled independent researcher Jonathan Leitschuh, who sparked this week’s events with a blog post Monday.

“First off, let me start off by saying having an installed app that is running a web server on my local machine with a totally undocumented API feels incredibly sketchy to me,” Leitschuh wrote in his public disclosure. “Secondly, the fact that any website that I visit can interact with this web server running on my machine is a huge red flag for me as a security researcher.”

Leitschuh’s disclosure forced Zoom to issue multiple statements as user outrage grew. The security threat received widespread international news coverage, with many headlines containing the chilling combination of “hacker” and “webcam.” In an interview Wednesday, Zoom’s chief information security officer, Richard Farley, said the news coverage caused “maybe some panic that was unnecessary.”

“Part of the challenge for us, of course, is controlling that message out there that this was not as big a deal as it’s been made out to be,” Farley said. “There’s a lot of misinformation that went out there. … People just didn’t understand it.”

Zoom initially tried to assuage fears about the Mac web server without removing it. The company pointed out that it would be obvious to users they had just joined a meeting because a window would open in the foreground and their webcam’s indicator light would flash on. Also, a hacker couldn’t gain access to a webcam in secret or retain access to that video feed after users exited a meeting.  

Ultimately, Zoom reversed its original position and released a software update Tuesday that removed the web server from its Mac architecture. The next day, Apple pushed out a software patch that wiped the web server from all Mac devices, even for users who had previously deleted Zoom.

“We misjudged the situation and did not respond quickly enough — and that’s on us,” Zoom CEO Eric Yuan wrote in a blog post. “We take full ownership, and we’ve learned a great deal.”

Zoom’s default preferences added fuel to the fire. Unless users go out of their way to alter Zoom’s out-of-the-box settings, their webcams will be on by default when joining meetings. Also, Zoom does not by default have a pre-meeting lobby in which users confirm their audio and video settings before connecting.

Zoom said it would release an update over the July 13 weekend to make it easier for new users to control video settings. The first time a user joins a meeting, they will be able to instruct the app to join them to all future sessions with their webcams turned off.

Zoom has also taken heat for allowing embedded IFrame codes to launch Zoom meetings. In a statement, the company said IFrames — a method for adding HTML content to webpages — were necessary to support its integrations.

Leitschuh first raised the security issues with Zoom in March. The company invited him to its private bug bounty program, offering money in exchange for Leitschuh agreeing not to disclose his research publicly. Leitschuh, who said the proposed bounty was less than $1,000, declined because of the demand for secrecy.

Despite clashing over whether to remove the web server, Leitschuh and Zoom were able to agree on the severity of the risk it posed. They gave it a Common Vulnerability Scoring System rating of 5.4 out of 10. That score is in the “medium” range — riskier than “low” but not as severe as “high” or “critical.”

Zoom’s response to Leitschuh’s concerns was an indicator that companies have to verify the security architectures of UC vendors, analysts said.

“This event should be a clear reminder to both vendors and customers using UC and collaboration tools that there are very real threats to their platforms,” said Michael Brandenburg, analyst at Frost & Sullivan. “We are long past the days of only having to worry about toll fraud, and businesses have to be as mindful of the security risks on their UC platforms as they are with any other business application.”

Researchers bring back cold boot attacks on modern computers

It’s 2008 all over again as researchers have found a way to leverage cold boot attacks against modern computers to steal sensitive data from lost or stolen devices.

Olle Segerdahl and Pasi Saarinen, security consultants for F-Secure, developed the new cold boot attack method and claim it “will work against nearly all modern computers,” including both Windows and macOS devices.

In classic cold boot attacks, threat actors could recover data stored in RAM after a computer was improperly shut down, but modern operating systems have mitigations against this by way of overwriting RAM. Segerdahl and Saarinen found a way to disable this feature.

“It takes some extra steps compared to the classic cold boot attack, but it’s effective against all the modern laptops we’ve tested,” Segerdahl said in a written press statement. “And since this type of threat is primarily relevant in scenarios where devices are stolen or illicitly obtained, it’s the kind of thing an attacker will have plenty of time to execute.”

Segerdahl and Saarinen developed a tool that could re-write the mitigation settings in memory, which would disable memory overwriting and allow them to boot from an external device that could read the target system’s memory. The researchers said cold boot attacks like this could be used to steal sensitive data like credentials or even encryption keys held in memory.

“It’s not exactly easy to do, but it’s not a hard enough issue to find and exploit for us to ignore the probability that some attackers have already figured this out,” Segerdahl said in a statement. “It’s not exactly the kind of thing that attackers looking for easy targets will use. But it is the kind of thing that attackers looking for bigger phish, like a bank or large enterprise, will know how to use.”

The researchers said cold boot attacks like this could provide a consistent way for threat actors to steal data because the technique works across platforms. And although the researchers have shared their findings with Microsoft, Intel and Apple, mitigations are still a work in progress.

Apple claims that Macs with the T2 chip are immune to cold boot attacks — though this only includes the iMac Pro and 2018 MacBook Pro models — and suggested users with other Mac devices set a firmware password. Microsoft updated its BitLocker guidance to help users protect sensitive information.

Trend Micro apps on Mac accused of stealing data

Researchers charged that multiple apps in the Mac App Store were stealing data and Apple removed the offending apps from the store, but now Trend Micro is refuting the claims against its apps.

At least eight apps — six Trend Micro apps and two published by a developer who goes by the name “Yongming Zhang” — were found to be gathering data, including web browsing history, App Store browsing history and a list of installed apps, from user systems. Reports about the apps potentially stealing data first appeared on the Malwarebytes forum in late 2017, but the issues were confirmed recently by at least three individuals: Patrick Wardle, CEO and founder of Digita Security; a security researcher based in Germany who goes by the Twitter handle @privacyis1st; and Thomas Reed, director of Mac and mobile at Malwarebytes Labs.

Wardle dug into claims by @privacyis1st that the fourth-ranked paid app in the Mac App Store — Adware Doctor, published by “Yongming Zhang” — was stealing data. At first the app appeared to behave normally, but when it came time to “clean” the user’s system, Wardle observed it stealing browser history data and a list of installed apps.

“From a security and privacy point of view, one of the main benefits of installing applications from the official Mac App Store is that such applications are sandboxed. (The other benefit is that Apple supposedly vets all submitted applications – but as we’ve clearly shown here, they (sometimes?) do a miserable job),” Wardle wrote in a blog post. “When an application runs inside a sandbox it is constrained by what files or user information it can access. For example, a sandboxed application from the Mac App Store should not be able to access a user’s sensitive browser history. But Adware Doctor clearly found [a way].”

Trend Micro apps and company response

Adware Doctor and another app — Open Any Files: RAR Support — were developed by an unknown developer whose identity is based on the name of a notorious Chinese serial killer, Zhang Yongming, who was executed in 2013 after being convicted of killing 11 boys and young men. In addition to these apps stealing data, Reed noted in his analysis that at least two Trend Micro apps appeared to be acting improperly.

Reed said he “saw the same data being collected and also uploaded in a file named file.zip to the same URL used by Open Any Files” in the app Dr. Antivirus. Reed said Open Any Files and the Trend Micro apps were uploading the zip file to Trend Micro servers.

“Unfortunately, other apps by the same developer are also collecting this data. We observed the same data being collected by Dr. Cleaner, minus the list of installed applications,” Reed wrote in his analysis. “There is really no good reason for a ‘cleaning’ app to be collecting this kind of user data, even if the users were informed, which was not the case.”

Trend Micro admitted that its apps — Dr. Cleaner, Dr. Cleaner Pro, Dr. Antivirus, Dr. Unarchiver, Dr. Battery and Duplicate Finder — were removed from the Mac App Store, but denied that the apps were “stealing” data and sending that data to Chinese servers.

The company said in its response that the Trend Micro apps were collecting and uploading “a small snapshot of the browser history on a one-time basis, covering the 24 hours prior to installation,” but claimed this functionality was “for security purposes” and that the actions were permitted by users as part of the EULA agreed to on installation.

Trend Micro linked to a support page for Dr. Cleaner that showed browser history as one of the types of data collected with user permission, but Reed said on Twitter that he kept archived copies of the apps and he did not find any in-app notifications about data collection.

Despite denying any wrongdoing, Trend Micro said it was taking steps to “reassure” users that their data was safe.

“First, we have completed the removal of browser collection features across our consumer products in question. Second, we have permanently dumped all legacy logs, which were stored on US-based AWS servers. This includes the one-time 24 hour log of browser history held for three months and permitted by users upon install,” Trend Micro wrote. “Third, we believe we identified a core issue which is humbly the result of the use of common code libraries. We have learned that browser collection functionality was designed in common across a few of our applications and then deployed the same way for both security-oriented as well as the non-security oriented apps such as the ones in discussion. This has been corrected.”

It is unclear why Open Any Files was uploading data to Trend Micro servers or if Trend Micro was the only company with access to the data uploaded by any of the Trend Micro apps.

Trend Micro did not respond to questions at the time of this post.

Apple’s responsibility in the Mac App Store

Despite being a central figure in the removal of the Trend Micro apps from the Mac App Store, Apple has kept quiet. The company has not made a public statement and did not respond to requests for comment at the time of this post.

Apple claims, “The safest place to download apps for your Mac is the Mac App Store. Apple reviews each app before it’s accepted by the store, and if there’s ever a problem with an app, Apple can quickly remove it from the store.” But Wardle said “it’s questionable whether these statements actually hold true,” given the number of apps found to be stealing data, and he pointed out that the Mac App Store has known issues with fake reviews propping up bad apps.

Stefan Esser, CEO of Antid0te UG, a security audit firm based in Cologne, Germany, also criticized Apple’s response to the claims apps in its store were stealing data.

“The fact that Apple was informed about this weeks ago and [chose] to ignore and that they finally reacted after bad press like two days before their announcement of new products for you to buy is for sure just coincidence,” Esser wrote on Twitter.

And Reed said it’s best to not trust certain apps in the Mac App Store.

New report shows how teachers use Microsoft Forms to drive improvement in learning outcomes

Microsoft Education has undertaken a new study with researchers at Digital Promise to investigate how teachers around the world are using Microsoft Forms to drive learning. Providing feedback to students on their learning progress is one of the most powerful pedagogical approaches and education research has repeatedly shown that formative feedback has a tremendous impact on learning outcomes.

In this study, we found that teachers use Microsoft Forms not only for formative assessment, but for many other pedagogical activities. Teachers value the ease of use and clear reporting of Microsoft Forms.

“I actually say to teachers, ‘I think Forms is the most underrated piece of software in the suite because of the time that it saves you in terms of data-driven outcomes and the data collection that goes on with schools now.’”  

– Instructional Technology Coach

We are delighted to share this new report, which highlights the variety of creative ways teachers are using Forms.

Teachers are using Microsoft Forms in pedagogically substantive ways to improve student outcomes:

  • Formative Assessment
  • Differentiating Instruction
  • Peer Collaboration (students creating their own Forms in groups)
  • Social and Emotional Learning (see this teacher’s video on how she leverages Forms for SEL)
  • Increasing Student Engagement

Teachers also used Microsoft Forms for professional learning and to increase their efficiency with administrative and routine teaching tasks, such as:

  • Communicating with Parents
  • Professional Development through Reflective Practice
  • Increasing Teaching Efficiency by incorporating lunch choices, applications, and locker assignments into Forms

We also explored some of the best practices school and education-system leaders are using to grow adoption and use of Microsoft Forms. Some implementation strategies to get teachers to use Forms:

  • The most essential strategy is simply making teachers aware that Microsoft Forms is available and how it can be used. Follow the Quick Start guide to try out Microsoft Forms.
  • Training on how to use Forms is the second step and most coaches believed this training should be undertaken on its own (not as part of training on other apps). Check out Microsoft’s own training course, Microsoft Forms: Creating Authentic Assessments.

Coaches used the following strategies:

  • Using a Form with teachers directly to show its simplicity of use and to get them familiar with the tool
  • Understanding their teacher audiences and designing training for those audiences (e.g. ‘savvy explorer’ or ‘cautious adopter’)
  • Describing the time-saving element of Microsoft Forms use, especially enabling teachers to give students instant feedback; and how Microsoft Forms enables data-driven approaches to pedagogy with the immediate capture of data to Microsoft Excel.

Forms is an ideal tool for helping teachers incorporate more data-driven approaches to understanding what is working in their teaching practices, because it makes the collection (and much of the analysis) of student-learning data automatic. Results from a mood survey, a math quiz, or an exit ticket Form are instantly available to both students and teachers. Such data helps teachers build stronger learning relationships with their students, because they know where each student is in their learning progress.

“There was that magical moment when getting the data happened. Oh my gosh, we’re getting this data in Forms in real time and that was unheard of before. Now within a matter of minutes I know where my students stand on the concepts that we’re going to cover that day.”

– 3rd Grade Teacher

Getting Started with Microsoft Forms

Microsoft Forms is an online quiz and survey application included with Microsoft Office 365. Forms was designed using direct feedback from educators looking for a simple way to formatively assess student learning and monitor learning progress on an ongoing basis.

Forms is part of the Office 365 suite of tools. If your school already has Office 365, you can log in at www.office.com and begin using Forms as one of the many apps included in the suite. Teachers and students can also download Office 365 for free using a valid school email address. The resources below will help you get started on your journey to using Microsoft Forms.

BGP hijacking attacks target payment systems

Researchers discovered BGP hijacking attacks targeting payment processing systems and using new tricks to maximize the attackers’ hold on DNS servers.

Doug Madory, director of internet analysis at Oracle Dyn, previously saw Border Gateway Protocol (BGP) hijacking attacks in April 2018 and has seen them continue through July. The first attack targeted an Amazon DNS server in order to lure victims to a malicious site and steal cryptocurrency, but more recent attacks targeted a wider range of U.S. payment services.

“As in the Amazon case, these more recent BGP hijacks enabled imposter DNS servers to return forged DNS responses, misdirecting unsuspecting users to malicious sites.  By using long TTL values in the forged responses, recursive DNS servers held these bogus DNS entries in their caches long after the BGP hijack had disappeared — maximizing the duration of the attack,” Madory wrote in a blog post. “The normal TTL for the targeted domains was 10 minutes (600 seconds).  By configuring a very long TTL, the forged record could persist in the DNS caching layer for an extended period of time, long after the BGP hijack had stopped.”
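
One observable symptom of this technique is an answer whose TTL is wildly above the value a domain normally publishes. A minimal monitoring sketch using the third-party dnspython package (the domain and the 600-second baseline are illustrative, taken from the figure quoted above):

```python
# Minimal monitoring sketch: flag DNS answers whose TTL is far above the
# domain's normal value, one symptom of the forged long-TTL responses
# described above. Requires the third-party dnspython package
# (pip install dnspython); the domain and 600-second baseline are illustrative.
import dns.resolver

DOMAIN = "example.com"
EXPECTED_TTL = 600  # the article cites 10 minutes as the normal TTL

answer = dns.resolver.resolve(DOMAIN, "A")
ttl = answer.rrset.ttl
print(f"{DOMAIN} A-record TTL: {ttl} seconds")
if ttl > 10 * EXPECTED_TTL:
    print("TTL is far above the expected baseline; possible cache poisoning")
```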

Madory detailed attacks on telecom companies in Indonesia and Malaysia as well as BGP hijacking attacks on U.S. credit card and payment processing services, the latter of which lasted anywhere from a few minutes to almost three hours. While the payment services attacks featured similar techniques to the Amazon DNS server attack, it’s unclear if the same threat actors are behind them.

Justin Jett, director of audit and compliance for Plixer, said BGP hijacking attacks are “extremely dangerous because they don’t require the attacker to break into the machines of those they want to steal from.”

“Instead, they poison the DNS cache at the resolver level, which can then be used to deceive the users. When a DNS resolver’s cache is poisoned with invalid information, it can take a long time post-attack to clear the problem. This is because of how DNS TTL works,” Jett wrote via email. “As Oracle Dyn mentioned, the TTL of the forged response was set to about five days. This means that once the response has been cached, it will take about five days before it will even check for the updated record, and therefore is how long the problem will remain, even once the BGP hijack has been resolved.”

Madory was not optimistic about what these BGP hijacking attacks might portend because of how fundamental BGP is to the structure of the internet.

“If previous hijacks were shots across the bow, these incidents show the internet infrastructure is now taking direct hits,” Madory wrote. “Unfortunately, there is no reason not to expect to see more of these types of attacks against the internet.”

Matt Chiodi, vice president of cloud security at RedLock, was equally worried and said these BGP hijacking attacks should be taken as a warning.

“BGP and DNS are the silent warriors of the internet and these attacks are extremely serious because nearly all other internet services assume they are secure. Billions of users rely on these mostly invisible services to accomplish everything from Facebook to banking,” Chiodi wrote via email. “Unfortunately, mitigating BGP and DNS-based attacks is extremely difficult given the trust-based nature of both systems.”

NetSpectre is a remote side-channel attack, but a slow one

Researchers developed a new proof-of-concept attack on Spectre variant 1 that can be performed remotely, but despite the novel aspects of the exploit, experts questioned the real-world impact.

Michael Schwarz, Moritz Lipp, Martin Schwarzl and Daniel Gruss, researchers at the Graz University of Technology in Austria, dubbed their attack “NetSpectre” and claim it is the first remote exploit against Spectre v1 and requires “no attacker-controlled code on the target device.”

“Systems containing the required Spectre gadgets in an exposed network interface or API can be attacked with our generic remote Spectre attack, allowing [it] to read arbitrary memory over the network,” the researchers wrote in their paper. “The attacker only sends a series of crafted requests to the victim and measures the response time to leak a secret value from the victim’s memory.”

Gruss wrote on Twitter that Intel was given ample time to respond to the team’s disclosure of NetSpectre.

Gruss went on to criticize Intel for not designating a new Common Vulnerabilities and Exposures (CVE) number for NetSpectre, but an Intel statement explained the reason for this was because the fix is the same as Spectre v1.

“NetSpectre is an application of Bounds Check Bypass (CVE-2017-5753) and is mitigated in the same manner — through code inspection and modification of software to ensure a speculation-stopping barrier is in place where appropriate,” an Intel spokesperson wrote via email. “We provide guidance for developers in our whitepaper, ‘Analyzing Potential Bounds Check Bypass Vulnerabilities,’ which has been updated to incorporate this method. We are thankful to Michael Schwarz, Daniel Gruss, Martin Schwarzl, Moritz Lipp and Stefan Mangard of Graz University of Technology for reporting their research.”

Jake Williams, founder and CEO of Rendition Infosec, agreed with Intel’s assessment and wrote by Twitter direct message that “it makes sense that this wouldn’t get a new CVE. It’s not a new vulnerability; it’s just exploiting an existing vulnerability in a new way.”

The speed of NetSpectre

Part of the research that caught the eye of experts was the detail that when exfiltrating memory, “this NetSpectre variant is able to leak 15 bits per hour from a vulnerable target system.”
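
To put that figure in perspective, a quick back-of-the-envelope calculation shows how long an attacker would need at 15 bits per hour to recover secrets of common sizes:

```python
# Back-of-the-envelope: time needed to leak secrets of common sizes at the
# 15 bits per hour rate reported for this NetSpectre variant.
LEAK_RATE_BITS_PER_HOUR = 15

targets = [("AES-128 key", 128), ("2048-bit RSA modulus", 2048), ("1 KiB of memory", 8 * 1024)]
for label, bits in targets:
    hours = bits / LEAK_RATE_BITS_PER_HOUR
    print(f"{label:22s}: {hours:8.1f} hours (~{hours / 24:.1f} days)")
```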

Kevin Beaumont, a security architect based in the U.K., explained on Twitter what this rate of exfiltration means.

Williams agreed and said that although the NetSpectre attack is “dangerous and interesting,” it is “not worth freaking out about.”

“The amount of traffic required to leak meaningful amounts of data is significant and likely to be noticed,” Williams wrote. “I don’t think attacks like this will get significantly faster. Honestly, the attack could leak 10 to 100 times faster and still be relatively insignificant. Further, when you are calling an API remotely and others call the same API, they’ll impact timing, reducing the reliability of the exploit.”

Gruss wrote by Twitter direct message that since an attacker can use NetSpectre to choose an arbitrary address in memory to read, the impact of the speed of the attack depends on the use case.

“Remotely breaking ASLR (address space layout randomization) within a few hours is quite nice and very practical,” Gruss wrote, adding that “leaking the entire memory is of course completely unrealistic, but this is also not what any attacker would want to do.”