Tag Archives: Researchers

Italian company implicated in GuLoader malware attacks

While tracking a new security threat known as “GuLoader,” researchers at Check Point Software Technologies discovered more than just a malicious software installer.

GuLoader has been on the radar of a number of security vendors this year. In a report published this week, Check Point Research said the installer, or network dropper, “has been very actively distributed in 2020 and is used to deliver malware with the help of cloud services such as Google Drive,” with hundreds of attacks using GuLoader observed every day.

An investigation into GuLoader led the security vendor to the website of an Italian security software company that offered a product called CloudEye. While the company’s operations and clearnet website appeared to be legitimate, providing software to protect Windows applications, it was actually selling a product comparable to GuLoader and undetectable by antivirus software, according to Check Point.

In its report titled “GuLoader? No, CloudEye,” Check Point estimates the Italian company makes a monthly income of $500,000 from sales to cybercriminals. And, according to Maya Levine, Check Point’s technical marketing engineer for cloud security, it’s been a legally registered Italian company operating a publicly available website for years. This form of sales is unusual because attackers commonly do their business on the dark web, Levine said. But even though the company isn’t hiding on the dark web, finding CloudEye wasn’t a simple process.

“While monitoring GuLoader we repeatedly encountered samples that our systems detected as GuLoader, but they didn’t have the URL in it for downloading the payload,” Levine said. “When we looked at it manually and analyzed it, we found the payload is embedded in the sample itself. It was slightly different than GuLoader — it was something called DarkEye.”

DarkEye
The Italian company offering CloudEye previously sold the product as DarkEye Protector, which Check Point researchers connected to the GuLoader malware dropper.

After a search for DarkEye on the dark web, Check Point researchers found multiple advertisements that described it as a cryptor that could be used with a variety of malware to make it fully undetectable to antivirus software. A closer look at who posted the advertisements led to a website whose URL was mentioned in the ads.

[CloudEye] pretended to be legitimate and aboveboard, but they are selling basically the same thing as GuLoader.
Maya Levine, technical marketing engineer for cloud security, Check Point Software Technologies

“It was connected to DarkEye but it was selling a product they called CloudEye. They pretended to be legitimate and aboveboard, but they are selling basically the same thing as GuLoader,” Levine said. “When we looked at the sample from CloudEye and the [sample] we had for GuLoader, we found it almost identical. The only difference came from code randomization techniques but the actual important information in the code, the import functions, were all identical.”

Check Point’s report cited CloudEye’s website, which states “DarkEye evolved into CloudEye! Next generation of Windows executables’ protection!” Earlier versions of the website on the Internet Archive’s Wayback Machine show the company was previously called DarkEye.

Not only did Check Point find CloudEye was offering a commodity downloader strikingly similar to GuLoader, it also provided video tutorials on its website showing how to use it.

“Basically what they’re selling is the ability to bypass cloud drive antivirus checking because Google and all those [cloud services] don’t allow you to upload malware. What they’re selling uses techniques to avoid being detected by a lot of these security products,” Levine said.

CloudEye and cloud-based attacks

A new trend is what initially jumpstarted Check Point’s inquiry into GuLoader. Earlier this year, the security vendor determined that the delivery of malware through cloud drives is one of the fastest-growing trends of 2020. Research into the trend led to the discovery of GuLoader, which has become very prevalent in the threat landscape, Levine said. According to Levine, up to 25% of all packed malware samples are GuLoader.

“We looked at how these attacks usually work. Usually there’s a dropper that’s sent in the form of an email, spam emails, that have an embedded attachment. An ISO file has the malicious executable then that dropper will download the malicious payload from a well-known cloud service and execute it,” Levine said.

Email security vendor Proofpoint has also been tracking GuLoader. Researchers first observed it being used in December 2019 to deliver Parallax RAT and began looking into the malware in conjunction with that research. Sherrod DeGrippo, senior director of threat research and detection at Proofpoint, says GuLoader is interesting for three reasons.

“First, it’s written in Visual Basic 6.0, a version of Visual Basic Microsoft stopped supporting in 2008. Second, we found that while it was new, it was being adopted very quickly by multiple threat actors. Third, it stores its encrypted payloads on Google Drive or Microsoft OneDrive, showing that threat actors are leveraging the cloud just like businesses are,” DeGrippo said.

One reason attackers are turning to this method of malware delivery is the fact that it can fool a lot of humans and a lot of firewalls, Levine said. 

“If humans look at the network activity and all they see is Google Drive, they’ll probably dismiss that activity as legitimate even though it’s contacting Google Drive to download something malicious,” Levine said. “Same thing with firewalls, because the antivirus signatures aren’t always distributed on a daily basis; sometimes it’s a weekly basis so there’s a lag these kind of attacks could take advantage of.”

Evasion and disguises

Hiding under a legitimate front isn’t the only sneaky part of the CloudEye dropper.

“There’s a spam email with an embedded attachment; usually it’s an ISO file with the malicious executable, and then they disguise the payload as a picture. The key here is that it’s encrypted while it’s in cloud storage; it only gets decrypted on the victim’s machine,” Levine said. “And what that does is make it so the cloud host can’t really kick off the malicious payload because it’s [still encrypted] while it’s on their servers, so they don’t really know what it is.”

The image file may appear as a jury summons, for example. Once it’s opened and the dropper is activated, it fetches the malware payload and only stores it in memory, Levine said.
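
The cloud host’s blindness to the payload comes down to ordinary encryption: what sits in cloud storage is ciphertext, and only the endpoint holding the key can recover the content. Here is a minimal, benign sketch of that property in Python, using the third-party cryptography package; it illustrates the concept only and is not GuLoader’s actual scheme.

```python
# Illustration of why an encrypted payload is opaque to a cloud host:
# the stored bytes are ciphertext, and only the holder of the key
# (here, the "endpoint") can recover the original content.
# Assumes the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in the scheme described above, the key
fernet = Fernet(key)          # travels with the dropper, not the file

payload = b"any content at all"
stored_in_cloud = fernet.encrypt(payload)

# A scanner on the storage side sees only opaque token bytes:
print(stored_in_cloud[:20])

# Decryption happens only on the endpoint that holds the key:
print(fernet.decrypt(stored_in_cloud))  # b'any content at all'
```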

While there is some technology like sandboxing that will detect these malicious droppers, Levine said CloudEye has been a common denominator in thousands of attacks over the past year.

While threat actors standing up a “fake” company like this is not very common, Check Point’s head of cyber research Yaniv Balmas says it is not the first case in which a cybercrime tool was sold publicly on the internet.

“In most cases it is very difficult to link the tool to a specific company, or to a specific person. In this case however it seems the amount of connections we found linking this site to the ‘real world’ were significant. This might mean the owners are not concerned from being exposed, as they probably believe the ‘legitimacy cover’ is providing them with the required legal umbrella allowing them to continue their actions even if it will be brought to the public eye,” Balmas said. “The sad fact is they may be right.”

SearchSecurity contacted CloudEye for comment but the company has not responded. Attempts by Check Point to reach CloudEye were also unsuccessful.

CloudEye’s website was updated Wednesday with a statement from Sebastiano Dragna and Ivano Mancini, who were named in the Check Point report:

“We learned from the press that unsuspecting users would use our platform to perpetrate abuses of all kinds. Our protection software was created and developed to protect intellectual works from the abuse of hackers and their affiliates, not to sow malware around the network. Although we are not sure that what is reported by the media is true, we believe it appropriate to suspend our service indefinitely. We are two young entrepreneurs, passionate about IT security and our goal is to enrich the scientific community with our services, not to allow a distorted use of our intellectual work. We thank all our customers, who have legally used our services since 2015. Customers will be reimbursed for purchased and unused license days. For more information contact us by e-mail [email protected], you will receive an answer within 24 hours.”


New ‘Thanos’ ransomware weaponizes RIPlace evasion technique

Threat researchers at Recorded Future discovered a new ransomware-as-a-service tool, dubbed “Thanos,” that is the first to utilize the evasion technique known as RIPlace.

Thanos was put on sale as a RaaS tool “with the ability to generate new Thanos ransomware clients based on 43 different configuration options,” according to the report published Wednesday by Recorded Future’s Insikt Group.

Notably, Thanos is the first ransomware family to advertise its optional utilization of RIPlace, a technique introduced through a proof-of-concept (PoC) exploit in November 2019 by security company Nyotron. At its release, RIPlace bypassed most existing ransomware defense mechanisms, including antivirus and EDR products. But despite this, the evasion wasn’t considered a vulnerability because it “had not actually been observed in ransomware at the time of writing,” Recorded Future’s report said.

As reported by BleepingComputer last November, only Kaspersky Lab and Carbon Black modified their software to defend against the technique. But since January, Recorded Future said, “Insikt Group has observed members of dark web and underground forums implementing the RIPlace technique.”

According to its report on RIPlace, Nyotron discovered that file replacement actions using the Rename function in Windows could be abused by calling DefineDosDevice, which is a legacy function that creates a symbolic link or “symlink.”

Recorded Future shows how the RIPlace proof-of-concept exploit was adopted by a new ransomware-as-a-service tool known as Thanos.

Lindsay Kaye, director of operational outcomes for Recorded Future’s Insikt Group, told SearchSecurity that threat actors can use the MS-DOS device name to replace an original file with an encrypted version of that file without alerting most antivirus programs.

“As part of the file rename, it called a function that is part of the Windows API that creates a symlink from the file to an arbitrary device. When the rename call then happens, the callback using this passed-in device path returns an error; however, the rename of the file succeeds,” Kaye said. “But if the AV detection doesn’t handle the callback correctly, it would miss ransomware using this technique.”
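
Kaye’s point about mishandled callbacks can be made concrete with a toy model. The sketch below is not the Windows filter-driver API; the function names and the path check are invented stand-ins that show why treating a failed path resolution as “nothing to inspect” lets a RIPlace-style rename slip through.

```python
# Toy model of the detection gap Kaye describes; not real minifilter code.
from typing import Optional

def resolve_path(target: str) -> Optional[str]:
    """Stand-in path normalizer: fails on DosDevice-style targets,
    mirroring the error the real callback receives for RIPlace renames."""
    if target.startswith("\\\\.\\"):  # e.g. a DefineDosDevice symlink
        return None                   # resolution error
    return target.lower()

def looks_malicious(path: str) -> bool:
    return path.endswith(".encrypted")  # toy heuristic

def buggy_pre_rename_callback(target: str) -> bool:
    """Return True to allow the rename (the flawed behavior)."""
    resolved = resolve_path(target)
    if resolved is None:
        return True        # BUG: error treated as "nothing to inspect"
    return not looks_malicious(resolved)

def fixed_pre_rename_callback(target: str) -> bool:
    resolved = resolve_path(target)
    if resolved is None:
        return False       # FIX: unresolvable targets are suspicious
    return not looks_malicious(resolved)

# The OS-level rename through a DosDevice path succeeds either way,
# so a callback that allows it on error has effectively missed it:
print(buggy_pre_rename_callback("\\\\.\\SomeDevice"))  # True  -> missed
print(fixed_pre_rename_callback("\\\\.\\SomeDevice"))  # False -> flagged
```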

Insikt Group researchers first discovered the new Thanos ransomware family in January on an exploit forum. According to the Recorded Future report, Thanos was developed by a threat actor known as “Nosophoros” and has code and functions that are similar to another ransomware variant known as Hakbit.

While Nyotron’s PoC was eventually weaponized by the Thanos threat actors, Kaye was in favor of the vendor’s decision to publicly release RIPlace last year.

“I think at the time, publicizing it was great in that now antivirus companies can say great, now let’s make sure it’s something we’re detecting because if someone’s saying here’s a new technique, threat actors are going to take advantage of it so now it’s something that’s not going to be found out after people are victimized. It’s out in the open and companies can be aware of it,” Kaye said.

Recorded Future’s report noted that Thanos appears to have gained traction within the threat actor community and will continue to be deployed and weaponized by both individual cybercriminals and collectives through its RaaS affiliate program.


With new Garage project Trove, people can contribute photos to help developers build AI models

Every day, developers and researchers are finding creative ways to leverage AI to augment human intelligence and solve tough problems. Whether they’re training a computer vision model that can spot endangered snow leopards or one that helps us file business expenses by scanning pictures of receipts, they need a lot of quality pictures to do it. Developers usually crowdsource these large batches of pictures by enlisting the help of gig workers to submit photos, but often, these calls for photos feel like a black box. Participants have little insight into why they’re submitting a photo and can feel like their time was wasted when their submissions are rejected without explanation. At the same time, developers can find that these sourcing projects take a long time to complete due to lower-quality and less diverse inputs.

We’re excited to announce that Trove, a Microsoft Garage project, is exploring a solution that can enhance the experience and agency for both parties. Trove is a marketplace app that allows people to contribute photos to AI projects that developers can then use to train machine learning models. Interested parties can request an invite to join the experiment as a contributor or developer. Trove is currently accepting a small number of participants in the United States on both Android and iOS.

A marketplace that puts transparency and choice first

Today, most data collection is passive, with many people unaware that their data is being collected or not making a real-time, active choice to contribute their information. And even those who contribute more directly to model training projects are often not provided the greater context and purpose of the project; there’s little to no feedback loop to correct and align data submissions to better fit the needs of the project.

For people who rely on this data gig work as an important source of income, this rejection experience can leave them feeling frustrated, without the agency to make better submissions or earn a higher return on their time investment. With machine learning being a critical step in unlocking advancements from speech to image recognition, there’s an important opportunity to increase the quality of data, while making sure that contributors have the clarity and choice they need to participate in the process.

The Trove team has found a way to overcome these tough tradeoffs in a marketplace solution that emphasizes greater communication, context, and feedback between developers and project participants. “There’s a better way we can do this. You can have the transparency of how your data is being used and actually want to opt in to contribute to these projects and advance science and AI,” shares Krishnan Raghupathi, the Senior Program Manager for Trove. “We’d love to see this become a community where people are a key part of the project.”

To read more about key features and how Trove works for developers and contributors, check it out on the Garage Workbench.

Aspiring to higher quality data and increased contributor agency

The team behind Trove was originally inspired by thought leaders exploring how we can embrace the need for a large volume of data to enable AI advancements, while providing more agency to contributors and recognizing the value of their data. “We wanted to explore these concepts through something concrete,” shared Christian Liensberger, the lead Principal Program Manager on the project. “We decided to form an incubation team and build something that could show how things could be different.”

In creating Trove, the incubation team had to think through principles that would guide them as they brought such an experience to life. They believe that the best framework to produce the higher quality data needed to train these AI models involves connecting content creators to AI developers more directly. Trove was built with a design and approach that focuses on four core principles:

  • Transparency: See all the projects available, details about who is posting them, and how your data will be used
  • Control: Decide which projects you want to contribute to, and control when and how much you contribute
  • Enrichment: Learn directly from AI developers how your contributions are valuable, and see how your participation will advance AI projects
  • Connection: Communicate with AI developers to stay informed on projects you contributed to

“I love working on this project, it’s a continuous shift between the user need for privacy and control, and professionals’ need for data to innovate and create new products,” said Devis Lucato, Principal Engineering Manager for Trove. “We’re pushing the boundaries of all the technologies that we touch, exploring new features and challenging decisions determined by the status quo.”

Before releasing this experiment to external users, the team piloted Trove with Microsoft employees from across the US. While Trove is still in an experimental phase, the team is excited for even more feedback. “Our solution is still a bit rough around the edges, but we want to hear from the community about what we should focus on next,” shares Christian. Trinh Duong, the Marketing Manager on the project added, “My favorite part about working on this has been how much the app incorporates users into the experience. We want to invite our users to reach out and join us as true participants in the creation of this concept.”

The team is welcoming feedback from experiment participants here, and is enthusiastic for the input of users who are as passionate about the principles of transparency, control, enrichment, and connection as they are.

Request an invite and share your feedback

Trove will be available to try in the United States upon request while room in the experiment is still available. Request an invite to join the experiment, or request to add an ML project to the experiment.


How a synthetic data approach is helping COVID-19 research

As medical researchers around the world race to find answers to the COVID-19 pandemic, they need to gather as much clinical data as possible for analysis.

A key challenge many researchers face with clinical data is privacy and the mandate to protect confidential patient information. One way to overcome that privacy challenge is by using synthetic data, an approach that creates data that is not linked to personally identifiable information. Rather than encrypting or attempting to anonymize data to protect privacy, synthetic data represents a different approach that can be useful for medical researchers.

With synthetic data there are no real people, rather the data is a synthetic copy that is statistically comparable, but entirely composed of fictional patients, explained Ziv Ofek, founder and CEO of health IT vendor MDClone, based in Beer Sheba, Israel.

Other popular methods of protecting patient privacy, such as anonymization and encryption, aim to balance patient privacy and data utility. However, a privacy risk still remains because embedded within the data, even after diligent attempts to protect privacy, are real people, Ofek argued.

“There are no real people embedded within the synthetic data,” Ofek said. “Instead, the data is a statistical representation of the original and the risk of reidentification is no longer relevant, even though it may appear as real people and can be analyzed as if it were, yielding the same conclusions.”

MDClone’s Synthetic Data Engine creates anonymous data statistically identical to the original.
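
To make “statistically comparable but entirely fictional” concrete, here is a minimal, hypothetical sketch: fit simple per-column statistics on a real table, then sample brand-new rows from them. It preserves only marginal distributions and is not a description of MDClone’s actual engine.

```python
# Toy illustration of synthetic data: no output row corresponds to a
# real patient, but summary statistics resemble the original table.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in "real" table with invented columns for illustration.
real = pd.DataFrame({
    "age": rng.normal(62, 12, 1000).round(),
    "sex": rng.choice(["F", "M"], 1000),
    "crp_mg_l": rng.lognormal(2.0, 0.8, 1000),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    out = {}
    for col in df.columns:
        s = df[col]
        if s.dtype.kind in "if":      # numeric: fit mean/std, resample
            out[col] = rng.normal(s.mean(), s.std(), n)
        else:                         # categorical: sample by frequency
            freq = s.value_counts(normalize=True)
            out[col] = rng.choice(freq.index.to_numpy(), n, p=freq.values)
    return pd.DataFrame(out)

synthetic = synthesize(real, 1000)
# Similar summary statistics, but every row is fictional:
print(round(real["age"].mean(), 1), round(synthetic["age"].mean(), 1))
```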

Synthetic data in the real world

MDClone’s synthetic data technology is being used by Sheba Medical Center in Tel Aviv as part of its COVID-19 research.

Synthetic data provides an opportunity to get quick answers to data-related questions … [and] allows users to work on the data in their own environment, something we do not allow with real data.
Eyal Zimlichman, M.D., deputy director general, Sheba Medical Center

The MDClone system is critical to his organization’s data efforts to gain more insights into COVID-19, the disease caused by the novel coronavirus, said Eyal Zimlichman, M.D., deputy director general, chief medical officer and chief innovation officer at Sheba Medical.

By regulation, synthetic data is not considered patient data and therefore is not subject to the institutional review board (IRB) process. As opposed to real patient data, Ofek noted, synthetic data can be accessed freely by researchers, so long as the institution agrees to provide access.

“Synthetic data provides an opportunity to get quick answers to data-related questions without the need for an IRB approval,” Zimlichman said. “It also allows users to work on the data in their own environment, something we do not allow with real data.”

Zimlichman added that data science groups both within and outside the hospital are using the MDClone system to help predict COVID-19 patient outcomes, as well as to aid in determining a course of action for therapy.

Synthetic data accelerates time to insight

The MDClone platform includes a data engine for collecting and organizing patient data, the discovery studio for analysis and the Synthetic Data Engine for creating data. The vendor on April 14 released the MDClone Pandemic Response Package, which includes a predefined set of visualizations and analyses that are COVID-19-specific. The engine enables clients and networks to ask questions of COVID-19-related data and generate meaningful analysis, including cohort and population-level insights.

In the event a client wants to use their data to share, compare and collaborate with others, they can convert their original data into a synthetic copy for shared review and insight development.

“A synthetic collaboration model allows for that conversation to take place with data flows and analysis performed across both systems without patient privacy and security risks,” Ofek said.

Ofek added that the synthetic model and platform access capability enables clients to invite research and collaboration partners into their data environment rather than simply sharing files on demand. With MDClone, the client’s research and collaboration partners are able to log in to the MDClone data lake and then get access to the data and exploration tools with synthetic output.

“In the context of the pandemic, organizations leveraging the platform can offer partners unfettered synthetic access to accelerate exploration into new avenues for treatment,” Ofek said. “Idea generation and data reviews that enable real-world analysis is our pathway to finding and broadcasting the best healthcare professionals can offer as we combat the disease.”


New Azure HPC and partner offerings at Supercomputing 19

For more than three decades, the researchers and practitioners that make up the high-performance computing (HPC) community have come together for their annual event. More than ten thousand strong in attendance, the global community will converge on Denver, Colorado, to advance the state of the art in HPC. The theme for Supercomputing ’19 is “HPC is now” – a theme that resonates strongly with the Azure HPC team, given the breakthrough capabilities we’ve been working to deliver to customers.

Azure is upending preconceptions of what the cloud can do for HPC by delivering supercomputer-grade technologies, innovative services, and performance that rivals or exceeds some of the most optimized on-premises deployments. We’re working to ensure Azure is paving the way for a new era of innovation, research, and productivity.

At the show, we’ll showcase Azure HPC and partner solutions, benchmarking white papers and customer case studies – here’s an overview of what we’re delivering.

  • Massive Scale MPI – Solve problems at the limits of your imagination, not the limits of other public clouds’ commodity networks. Azure supports your tightly coupled workloads up to 80,000 cores per job, featuring the latest HPC-grade CPUs and ultra-low latency HDR InfiniBand networking (see the MPI sketch after this list).
  • Accelerated Compute – Choose from the latest GPUs, field-programmable gate arrays (FPGAs), and now IPUs for maximum performance and flexibility across your HPC, AI, and visualization workloads.
  • Apps and Services – Leverage advanced software and storage for every scenario: from hybrid cloud to cloud migration, from POCs to production, from optimized persistent deployments to agile environment reconfiguration. Azure HPC software has you covered.
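
As a sense of what a tightly coupled MPI workload looks like, here is a minimal collective operation written with mpi4py (an assumption; any MPI binding would do). It is trivial at four ranks, but the all-reduce step is exactly where interconnect latency and bandwidth decide performance as jobs scale toward tens of thousands of cores.

```python
# Minimal tightly coupled MPI job of the kind the HB-series targets.
# Assumes mpi4py is installed; launch with e.g.:
#   mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank holds a local chunk of data...
local = np.full(1_000_000, rank, dtype=np.float64)

# ...and the collective all-reduce is where the interconnect dominates
# as the job scales out across InfiniBand-connected nodes.
total = np.empty_like(local)
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("ranks:", comm.Get_size(), "sum of first element:", total[0])
```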

Azure HPC unveils new offerings

  • The preview of new second gen AMD EPYC based HBv2 Azure Virtual Machines for HPC – HBv2 virtual machines (VMs) deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 VMs feature 120 non-hyperthreaded CPU cores from the new AMD EPYC™ 7742 processor, up to 45 percent more memory bandwidth than x86 alternatives, and up to 4 teraFLOPS of double-precision compute. Leveraging the cloud’s first deployment of 200 Gigabit HDR InfiniBand from Mellanox, HBv2 VMs support up to 80,000 cores per MPI job to deliver workload scalability and performance that rivals some of the world’s largest and most powerful bare metal supercomputers. HBv2 is not just one of the most powerful HPC servers Azure has ever made, but also one of the most affordable. HBv2 VMs are available now.
  • The preview of new NVv4 Azure Virtual Machines for virtual desktops – NVv4 VMs enhance Azure’s portfolio of Windows Virtual Desktops with the introduction of the new AMD EPYC™ 7742 processor and the AMD RADEON INSTINCT™ MI25 GPUs. NVv4 gives customers more choice and flexibility by offering partitionable GPUs. Customers can select a virtual GPU size that is right for their workloads and price points, with as little as 2GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, up to an entire MI25 GPU with 16GB of HBM2 memory to provide powerful workstation-class experience. NVv4 VMs are available now in preview.
  • The preview of NDv2 Azure GPU Virtual Machines – NDv2 VMs are the latest, fastest, and most powerful addition to the Azure GPU family, and are designed specifically for the most demanding distributed HPC, artificial intelligence (AI), and machine learning (ML) workloads. These VMs feature 8 NVIDIA Tesla V100 NVLink interconnected GPUs, each with 32 GB of HBM2 memory, 40 non-hyperthreaded cores from the Intel Xeon Platinum 8168 processor, and 672 GiB of system memory. NDv2-series virtual machines also now feature 100 Gigabit EDR InfiniBand from Mellanox with support for standard OFED drivers and all MPI types and versions. NDv2-series virtual machines are ready for the most demanding machine learning models and distributed AI training workloads, with out-of-box NCCL2 support for InfiniBand allowing for easy scale-up to supercomputer-sized clusters that can run workloads utilizing CUDA, including popular ML tools and frameworks. NDv2 VMs are available now in preview.
  • Azure HPC Cache now available – When it comes to file performance, the new Azure HPC Cache delivers flexibility and scale. Use this service right from the Azure Portal to connect high-performance computing workloads to on-premises network-attached storage or run Azure Blob as the portable operating system interface.
  • The preview of new NDv3 Azure Virtual Machines for AI – NDv3 VMs are Azure’s first offering featuring the Graphcore IPU, designed from the ground up for AI training and inference workloads. The IPU’s novel architecture enables high-throughput processing of neural networks even at small batch sizes, which accelerates training convergence and enables short inference latency. With the launch of NDv3, Azure is bringing the latest in AI silicon innovation to the public cloud and giving customers additional choice in how they develop and run their massive-scale AI training and inference workloads. NDv3 VMs feature 16 IPU chips, each with over 1,200 cores and over 7,000 independent threads and a large 300MB on-chip memory that delivers up to 45 TB/s of memory bandwidth. The 8 Graphcore accelerator cards are connected through a high-bandwidth, low-latency interconnect enabling large models to be trained in a model-parallel (including pipelining) or data-parallel way. An NDv3 VM also includes 40 cores of CPU, backed by Intel Xeon Platinum 8168 processors, and 768 GB of memory. Customers can develop applications for IPU technology using Graphcore’s Poplar® software development toolkit, and leverage the IPU-compatible versions of popular machine learning frameworks. NDv3 VMs are now available in preview.
  • NP Azure Virtual Machines for HPC coming soon – Our Alveo U250 FPGA-accelerated VMs offer from one to four Xilinx U250 FPGA devices per Azure VM, backed by powerful Xeon Platinum CPU cores and fast NVMe-based storage. The NP series will enable true lift-and-shift and single-target development of FPGA applications for a general-purpose cloud. Based on a board and software ecosystem customers can buy today, RTL and high-level language designs targeted at Xilinx’s U250 card and SDAccel 2019.1 runtime will run on Azure VMs just as they do on-premises and on the edge, enabling the bleeding edge of accelerator development to harness the power of the cloud without additional development costs.

  • Azure CycleCloud 7.9 Update – We are excited to announce the release of Azure CycleCloud 7.9. Version 7.9 focuses on improved operational clarity and control, in particular for large MPI workloads on Azure’s unique InfiniBand-interconnected infrastructure. Among many other improvements, this release includes:

    • Improved error detection and reporting user interface (UI) that greatly simplify diagnosing VM issues.

    • Node time-to-live capabilities via a “Keep-alive” function, allowing users to build and debug MPI applications that are not affected by autoscaling policies.

    • VM placement group management through the UI that provides users direct control into node topology for latency sensitive applications.

    • Support for Ephemeral OS disks, which improve start-up performance and cost for virtual machines and virtual machine scale sets.

  • Microsoft HPC Pack 2016, Update 3 – released in August, Update 3 includes significant performance and reliability improvements, support for Windows Server 2019 in Azure, a new VM extension for deploying Azure IaaS Windows nodes, and many other features, fixes, and improvements.

In all of our new offerings and alongside our partners, Azure HPC aims to consistently offer the latest in capabilities for HPC-oriented use cases. Together with our partners Intel, AMD, Mellanox, NVIDIA, Graphcore, Xilinx, and many more, we look forward to seeing you next week in Denver!


Check Point: Qualcomm TrustZone flaws could be ‘game over’

Security researchers found vulnerabilities in the Qualcomm TrustZone secure element extension that could allow attackers to steal the most sensitive data stored on mobile devices.

TrustZone implements architectural security extensions on ARM processors that can be integrated into the bootloader, radio, Android system image and a trusted execution environment (TEE) in mobile devices. Slava Makkaveev, security researcher at Check Point Software Technologies, discovered the issues in the Qualcomm TrustZone implementation often used by major Android manufacturers.

“TEE code is highly critical to bugs because it protects the safety of critical data and has high execution permissions. A vulnerability in a component of TEE may lead to leakage of protected data, device rooting, bootloader unlocking, execution of undetectable APT and more. Therefore, a Normal world OS restricts access to TEE components to a minimal set of processes,” Makkaveev wrote in his analysis. “Examples of privileged OS components are DRM service, media service and keystore. However, this does not reduce researchers’ attention to the TrustZone.”

Makkaveev said the Qualcomm TrustZone components can be found in popular Android devices from Samsung, Google, LG and OnePlus. He used fuzzing tools to discover the vulnerabilities and exploited them in order to install a trusted app in a normal environment.

Check Point claimed the flaws affect all versions of Android up to the most recent Android 10; however, Makkaveev mentions testing on only a Nexus 6 running Android 7.1, an LG G4 running Android 6 and Moto G4/G4 Plus running an unknown version of Android.

Samsung, Motorola, LG and Qualcomm did not respond to requests for comment at the time of this post. Google responded but did not have information readily available as to whether more recent Google Pixel devices are at risk.

Liviu Arsene, global cybersecurity researcher at antimalware firm Bitdefender, based in Romania, told SearchSecurity this research is important because “high-complexity and high-reward vulnerabilities [like this] can potentially offer untethered access to critical assets and data on the device.”

“When a vulnerability in the software that sits between the hardware and the operating system running on top of it is found, successful exploitation can have serious security and privacy implications,” Arsene said. “Not because attackers could potentially access critical and sensitive data, but because attackers can compromise the security of the device, while being invisible to the victim. Depending on how the vulnerability is triggered, weaponized attackers might successfully exfiltrate sensitive data such as passwords, financial information, or even planting additional software on the device.”

Ekram Ahmed, head of public relations at Check Point, told SearchSecurity, “it’s only a matter of time before we find more vulnerabilities.”

Once someone gains access into Trust Zone, it’s game over. They can get unprecedented access to our credit cards, biometric data, keys, passwords.
Ekram Ahmed, head of public relations, Check Point

“Once someone gains access into Trust Zone, it’s game over. They can get unprecedented access to our credit cards, biometric data, keys, passwords,” Ahmed said. “It wouldn’t be too difficult for a medium-skilled cyber actor to exploit. What is difficult is knowing exactly who is affected. The vulnerability is a deeper infrastructure issue.”

Arsene said he wouldn’t expect to see these Qualcomm TrustZone flaws exploited “en masse in the wild.”

“While weaponizing the vulnerability may be possible, it’s likely that only a handful of users could potentially be impacted, possibly in highly targeted attacks,” Arsene said. “However, the difficulty of pulling off these attacks lies in how easily the vulnerability can be weaponized.”

Ahmed added that Check Point notified all potentially affected device manufacturers, but the company had “strange” interactions with Qualcomm leading up to the patch being released.

“We asked them to patch, and they only told us they patched a day before we published the blog, because the media was reaching out to them,” Ahmed said. “They went months without communicating a single word to us.”


ZombieLoad v2 disclosed, affects newest Intel chips

Security researchers disclosed a new version of the ZombieLoad attack and warned that Intel’s fixes for the original threat can be bypassed.

The original ZombieLoad attack — a speculative execution exploit that could allow attackers to steal sensitive data from Intel processors — was first announced May 14 as part of a set of microarchitectural data sampling (MDS) attacks that also included RIDL (Rogue In-Flight Data Load) and Fallout. According to the researchers, they first disclosed ZombieLoad v2 to Intel on April 23 with an update on May 10 to communicate that “the attacks work on Cascade Lake CPUs,” Intel’s newest line of processors. However, ZombieLoad v2 was kept under embargo until this week.

“We present a new variant of ZombieLoad that enables the attack on CPUs that include hardware mitigations against MDS in silicon. With Variant 2 (TAA), data can still be leaked on microarchitectures like Cascade Lake where other MDS attacks like RIDL or Fallout are not possible,” the researchers wrote on the ZombieLoad website. “Furthermore, we show that the software-based mitigations in combinations with microcode updates presented as countermeasures against MDS attacks are not sufficient.”

One of the ZombieLoad researchers, Moritz Lipp, PhD candidate in information security at the Graz University of Technology in Austria, told SearchSecurity the problem with the patch for the initial MDS issues is that it “does not prevent the attack, just makes it harder. It just takes longer as the leakage rate is not that high.”

Lipp added that the team’s relationship with Intel has been improving over the past two years and the extended embargo was a direct result of ZombieLoad v2 affecting Cascade Lake processors.

In an update to the original ZombieLoad research paper, the researchers noted that the main advantage of variant two “is that it also works on machines with hardware fixes for Meltdown,” and noted that the attack requires “the Intel TSX instruction-set extension which is only available on selected CPUs since 2013,” including various Skylake, Kaby Lake, Coffee Lake, Broadwell and Cascade Lake processors.
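
Because the variant depends on TSX, a quick first check on a Linux host is whether the CPU advertises the rtm flag. Below is a small sketch, assuming Linux and /proc/cpuinfo; absence of the flag (or TSX having been disabled by a microcode update) means this particular variant does not apply.

```python
# Linux-only check for the TSX/RTM instruction-set extension that
# ZombieLoad v2 (TAA) requires; "rtm" among the CPU flags means the
# processor exposes TSX.
def cpu_has_tsx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "rtm" in line.split()
    return False

if __name__ == "__main__":
    print("TSX/RTM present:", cpu_has_tsx())
```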

Intel did not respond to questions regarding ZombieLoad v2 — which the company refers to as TSX Asynchronous Abort (TAA) — or the original MDS patch, and instead directed SearchSecurity to the company’s November 2019 Intel Platform Update blog post. In that blog post, Jerry Bryant, director of communications for Intel Product Assurance and Security, admitted Intel’s MDS mitigations fell short.

“We believe that the mitigations for TAA and MDS substantively reduce the potential attack surface,” Bryant wrote. “Shortly before this disclosure, however, we confirmed the possibility that some amount of data could still be inferred through a side-channel using these techniques (for TAA, only if TSX is enabled) and will be addressed in future microcode updates.”

In an attached “deep dive,” Intel also admitted the ZombieLoad v2 attack “may expose data from either the current logical processor or from the sibling logical processor on processors with simultaneous multithreading.”

The researchers also noted that with the range of CPUs affected, the attack could be performed both on PCs as well as in the cloud.

“The attack can be mounted in virtualized environments like the cloud as well across hyperthreads, if two virtual machines are each running on one of them,” Lipp told SearchSecurity. “However, typically huge cloud providers don’t schedule virtual machines anymore.”

Chris Goettl, director of product management, security at Ivanti, told SearchSecurity that while the research is interesting, the risks of ZombieLoad are relatively low.

“In a cloud environment a vulnerability like this could allow an attacker to glean information across many companies, true, but we are talking about a needle in a field of haystacks,” Goettl said. “Threat actors have motives and they will drive toward their objectives in most cases as quickly and easily as they possibly can. There are a number of information disclosure vulnerabilities that are going to be far easier to exploit than ZombieLoad.”

Lipp confirmed that in order to leak sensitive data, an attacker would need to ensure “a victim loads specific data, for instance triggering code that loads passwords in order to authenticate a user, an attacker can leak that.”

Ultimately, Goettl said he would expect Intel to continue to be reactive with side-channel attacks like ZombieLoad until there is “a precipitating event where any of these exploits are used in a real-world attack scenario.”

“The incomplete MDS patch probably says a little about how much effort Intel is putting into resolving the vulnerabilities. They fixed exactly what they were shown was the issue, but didn’t look beyond to see if something more should be done or if that fix could also be circumvented,” Goettl said. “As long as speculative execution remains academic Intel’s approach will likely continue to be reactive rather than proactive.”


Microsoft + The Jackson Laboratory: Using AI to fight cancer


Biomedical researchers are embracing artificial intelligence to accelerate the implementation of cancer treatments that target patients’ specific genomic profiles, a type of precision medicine that in some cases is more effective than traditional chemotherapy and has fewer side effects.

The potential for this new era of cancer treatment stems from advances in genome sequencing technology that enables researchers to more efficiently discover the specific genomic mutations that drive cancer, and an explosion of research on the development of new drugs that target those mutations.

To harness this potential, researchers at The Jackson Laboratory, an independent, nonprofit biomedical research institution also known as JAX and headquartered in Bar Harbor, Maine, developed a tool to help the global medical and scientific communities stay on top of the continuously growing volume of data generated by advances in genomic research.

The tool, called the Clinical Knowledgebase, or CKB, is a searchable database where subject matter experts store, sort and interpret complex genomic data to improve patient outcomes and share information about clinical trials and treatment options.

The challenge is to find the most relevant cancer-related information from the 4,000 or so biomedical research papers published each day, according to Susan Mockus, the associate director of clinical genomic market development with JAX’s genomic medicine institute in Farmington, Connecticut.

“Because there is so much data and so many complexities, without embracing and incorporating artificial intelligence and machine learning to help in the interpretation of the data, progress will be slow,” she said.

That’s why Mockus and her colleagues at JAX are collaborating with computer scientists working on Microsoft’s Project Hanover who are developing AI technology that enables machines to read complex medical and research documents and highlight the important information they contain.

While this machine reading technology is in the early stages of development, researchers have found they can make progress by narrowing the focus to specific areas such as clinical oncology, explained Peter Lee, corporate vice president of Microsoft Healthcare in Redmond, Washington.

“For something that really matters like cancer treatment where there are thousands of new research papers being published every day, we actually have a shot at having the machine read them all and help a board of cancer specialists answer questions about the latest research,” he said.

Peter Lee, corporate vice president of Microsoft Healthcare. Photo by Dan DeLong.

Curating CKB

Mockus and her colleagues are using Microsoft’s machine reading technology to curate CKB, which stores structured information about genomic mutations that drive cancer, drugs that target cancer genes and the response of patients to those drugs.

One application of this knowledgebase allows oncologists to discover what, if any, matches exist between a patient’s known cancer-related genomic mutations and drugs that target them as they explore and weigh options for treatment, including enrollment in clinical trials for drugs in development.
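
The matching step can be pictured as a lookup from a patient’s mutation profile into curated evidence. The sketch below is hypothetical; the small dictionary stands in for CKB, whose real schema and curation model are far richer.

```python
# Hypothetical sketch of the kind of lookup CKB enables: matching a
# patient's mutations against curated mutation -> therapy evidence.
# The entries and field layout are invented for illustration.
knowledgebase = {
    ("BRAF", "V600E"): ["vemurafenib", "dabrafenib + trametinib"],
    ("EGFR", "L858R"): ["osimertinib"],
    ("KRAS", "G12C"):  ["clinical trial: KRAS G12C inhibitor"],
}

def match_treatments(patient_mutations):
    """Return curated options for each of the patient's mutations."""
    matches = {}
    for gene, variant in patient_mutations:
        options = knowledgebase.get((gene, variant))
        if options:
            matches[(gene, variant)] = options
    return matches

print(match_treatments([("BRAF", "V600E"), ("TP53", "R175H")]))
# {('BRAF', 'V600E'): ['vemurafenib', 'dabrafenib + trametinib']}
```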

This information is also useful to translational and clinical researchers, Mockus noted.

The bottleneck is filtering through the more than 4,000 papers published every day in biomedical journals to find the subset of about 200 related to cancer, read them and update CKB with the relevant information on the mutation, drug and patient response.

“What you want is some degree of intelligence incorporated into the system that can go out and not just be efficient, but also be effective and relevant in terms of how it can filter information. That is what Hanover has done,” said Auro Nair, executive vice president of JAX.

The core of Microsoft’s Project Hanover is the capability to comb through the thousands of documents published each day in the biomedical literature and flag and rank all that are potentially relevant to cancer researchers, highlighting, for example, information on gene, mutation, drug and patient response.

Human curators working on CKB are then free to focus on the flagged research papers, validating the accuracy of the highlighted information.
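
A crude stand-in for that flag-and-rank pass is sketched below. Project Hanover’s machine reader is far more sophisticated; the keyword weights here are invented purely to show the shape of the triage step.

```python
# Illustrative flag-and-rank pass over incoming abstracts: score each
# one and surface the most relevant for human curators to read first.
CANCER_TERMS = {"tumor": 1, "oncogene": 2, "mutation": 2,
                "inhibitor": 2, "carcinoma": 3}

def score(abstract: str) -> int:
    words = abstract.lower().split()
    return sum(w * words.count(t) for t, w in CANCER_TERMS.items())

def triage(papers, top_k: int = 2):
    """Rank papers so curators see the most relevant ones first."""
    return sorted(papers, key=score, reverse=True)[:top_k]

papers = [
    "A BRAF mutation inhibitor shows response in carcinoma patients",
    "Survey of cloud computing cost models",
    "Oncogene expression and tumor growth in mice",
]
for p in triage(papers):
    print(p)
```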

“Our goal is to make the human curators superpowered,” said Hoifung Poon, director of precision health natural language processing with Microsoft’s research organization in Redmond and the lead researcher on Project Hanover.

“With the machine reader, we are able to suggest that this might be a case where a paper is talking about a drug-gene mutation relation that you care about,” Poon explained. “The curator can look at this in context and, in a couple of minutes, say, ‘This is exactly what I want,’ or ‘This is incorrect.’”

Hoifung Poon, director of precision health natural language processing with Microsoft’s research organization, is leading the development of Project Hanover, a machine reading technology. Photo by Jonathan Banks.

Self-supervision

To be successful, Poon and his team need to train machine learning models in such a way that they catch all the potentially relevant information – ensure there are no gaps in content – and, at the same time, weed out irrelevant information sufficiently to make the curation process more efficient.

In traditional machine reading tasks such as finding information about celebrities in news stories, researchers tend to focus on relationships contained within a single sentence, such as a celebrity name and a new movie.

Since this type of information is widespread across news stories, researchers can skip instances that are more challenging such as when the name of the celebrity and movie are mentioned in separate paragraphs, or when the relationship involves more than two pieces of information.

“In biomedicine, you can’t do that because your latest finding may only appear in this single paper and if you skip it, it could be life or death for this patient,” explained Poon. “In this case, you have to tackle some of the hard linguistic challenges head on.”

Poon and his team are taking what they call a self-supervision approach to machine learning in which the model automatically annotates training examples from unlabeled text by leveraging prior knowledge in existing databases and ontologies.

For example, a National Cancer Institute initiative manually compiled information from the biomedical literature on how genes regulate each other but was unable to sustain the effort beyond two years. Poon’s team used the compiled knowledge to automatically label documents and train a machine reader to find new instances of gene regulation.
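
Here is a minimal sketch of that self-supervision idea, with invented gene pairs and sentences: relations already compiled in an existing database automatically label raw text, producing training examples with no manual annotation.

```python
# Sketch of distant supervision: known gene-regulation pairs from a
# prior database auto-label sentences as training examples. The pairs
# and corpus below are invented for illustration.
KNOWN_PAIRS = {("TP53", "MDM2"), ("MYC", "CDK4")}

def auto_label(sentences):
    """Label a sentence positive if it mentions a known regulator pair."""
    examples = []
    for s in sentences:
        tokens = set(s.replace(",", " ").split())
        label = any(a in tokens and b in tokens for a, b in KNOWN_PAIRS)
        examples.append((s, int(label)))
    return examples

corpus = [
    "TP53 induces expression of MDM2 in a feedback loop",
    "MYC was detected in lung tissue samples",
]
print(auto_label(corpus))
# [(..., 1), (..., 0)] -- labeled with no human annotation
```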

They took the same approach with public datasets on approved cancer drugs and drugs in clinical trials, among other sources.

This connect-the-dots approach creates a machine-learned model that “rarely misses anything” and is precise enough “where we can potentially improve the curation efficiency by a lot,” said Poon.

Collaboration with JAX

The collaboration with JAX allows Poon and his team to validate the effectiveness of Microsoft’s machine reading technology while increasing the efficiency of Mockus and her team as they curate CKB.

“Leveraging the machine reader, we can say here is what we are interested in and it will help to triage and actually rank papers for us that have high clinical significance,” Mockus said. “And then a human goes in to really tease apart the data.”

Over time, feedback from the curators will be used to help train the machine reading technology, making the models more precise and, in turn, making the curators more efficient and allowing the scope of CKB to expand.

“We feel really, really good about this relationship,” said Nair. “Particularly from the standpoint of the impact it can have in providing a very powerful tool to clinicians.”

John Roach writes about Microsoft research and innovation. Follow him on Twitter.


USBAnywhere vulnerabilities put Supermicro servers at risk

Security researchers discovered a set of vulnerabilities in Supermicro servers that could allow threat actors to remotely attack systems as if they had physical access to the USB ports.

Researchers at Eclypsium, based in Beaverton, Ore., discovered flaws in the baseboard management controllers (BMCs) of Supermicro servers and dubbed the set of issues “USBAnywhere.” The researchers said authentication issues put servers at risk because “BMCs are intended to allow administrators to perform out-of-band management of a server, and as a result are highly privileged components.

“The problem stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, an ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest, and is susceptible to an authentication bypass,” the researchers wrote in a blog post. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.”

The USBAnywhere flaws allow the virtual USB drive to act in the same way a physical USB drive would, meaning an attacker could load a new operating system image, deploy malware or disable the target device. However, the researchers noted the attacks would be possible only on systems where the BMCs are directly exposed to the internet or if an attacker already has access to a corporate network.

Rick Altherr, principal engineer at Eclypsium, told SearchSecurity, “BMCs are one of the most privileged components on modern servers. Compromise of a BMC practically guarantees compromise of the host system as well.”

Eclypsium said there are currently “at least 47,000 systems with their BMCs exposed to the internet and using the relevant protocol.” These systems would be at additional risk because BMCs are rarely powered off and the authentication bypass vulnerability can persist unless the system is turned off or loses power.

Altherr said he found the USBAnywhere vulnerabilities because he “was curious how virtual media was implemented across various BMC implementations,” but Eclypsium found that only Supermicro systems were affected.

According to the blog post, Eclypsium reported the USBAnywhere flaws to Supermicro on June 19 and provided additional information on July 9, but Supermicro did not acknowledge the reports until July 29.

“Supermicro engaged with Eclypsium to understand the vulnerabilities and develop fixes. Supermicro was responsive throughout and worked to coordinate availability of firmware updates to coincide with public disclosure,” Altherr said. “While there is always room for improvement, Supermicro responded in a way that produced an amicable outcome for all involved.”

Altherr added that customers should “treat BMCs as a vulnerable device. Put them on an isolated network and restrict access to only IT staff that need to interact with them.”

Supermicro noted in its security advisory that isolating BMCs from the internet would reduce the risk of USBAnywhere but not eliminate the threat entirely. Firmware updates are currently available for affected Supermicro systems, and in addition to updating, Supermicro advised users to disable virtual media by blocking TCP port 623.
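
For administrators auditing their own networks, the check involved can be as simple as probing whether TCP port 623 on a BMC is reachable from a given vantage point. Below is a small sketch, not an Eclypsium or Supermicro tool; the addresses are hypothetical.

```python
# Audit sketch: check whether a BMC's virtual media service port
# (TCP 623, per the advisory) is reachable from this machine.
import socket

def port_open(host: str, port: int = 623, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for bmc in ["10.0.0.15", "10.0.0.16"]:  # hypothetical BMC addresses
    status = "EXPOSED" if port_open(bmc) else "filtered/closed"
    print(f"{bmc}:623 {status}")
```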


Microsoft Investigator Fellowship seeks PhD faculty submissions

August 1, 2019 | By Jamie Harper, Vice-President, US Education

Microsoft is expanding its support for academic researchers through the new Microsoft Investigator Fellowship. This fellowship is designed to empower researchers of all disciplines who plan to make an impact with research and teaching using the Microsoft Azure cloud computing platform.

From predicting traffic jams to advancing the Internet of Things, Azure has continued to evolve with the times, and this fellowship aims to keep Azure at the forefront of new ideas in the cloud computing space. Microsoft fellowships have similarly evolved; they have a long history of supporting researchers and of promoting diversity and promising academic research in the field of computing. This fellowship adds to that legacy and highlights the significance of Azure in education, both now and into the future.

Full-time faculty at degree-granting colleges or universities in the United States who hold PhDs are eligible to apply. The fellowship supports faculty who are currently conducting research, advising graduate students, and teaching in a classroom, and who plan to use or currently use Microsoft Azure in research, teaching, or both.

Fellows will receive $100,000 annually for two years to support their research. Fellows will also be invited to attend multiple events during this time, where they will make connections with other faculty from leading universities and Microsoft. They will have the opportunity to participate in the greater academic community as well. Members of the cohort will also be offered various training and certification opportunities.

When reviewing the submissions, Microsoft will evaluate the proposed future research and teaching impact of Azure. This will include consideration of how the Microsoft Azure cloud computing platform will be leveraged in size, scope, or unique ways for research, teaching, or both.

Candidates should submit their proposals directly on the fellowship website by August 16, 2019. Recipients will be announced in September 2019.

We encourage you to submit your proposal! For more information on the Microsoft Investigator Fellowship, please check out the fellowship website.
