
Jamf Protect offers visibility, protection for macOS admins

MINNEAPOLIS — Compliance and behavioral analysis features in endpoint security tool Jamf Protect may lessen IT concerns about adopting macOS devices in the enterprise.

Jamf Protect, announced here at the Jamf Nation User Conference (JNUC) 2019, takes a kernel-less — or kextless — approach to endpoint security. The platform offers day-one support of new macOS security features, insight into compliance across an organization’s fleet of macOS devices and behavior-based malware detection.

As the use of macOS in the enterprise increases, the landscape of security threats evolves, said David McIntyre, CISO and CTO of Build America Mutual, a financial services company in New York.

“There were so many more threats for Mac than I thought, so we had to add something to fight them off,” McIntyre said.

The origin of Jamf Protect

The announcement of a Jamf endpoint protection tool aligns with the company’s acquisition of Digita Security, a macOS endpoint security management company, earlier this year.

A lack of security management is one of the biggest hindrances to macOS adoption in the enterprise, said Patrick Wardle, co-founder of Digita Security and current principal security researcher at Jamf. Most enterprise organizations that consider deploying macOS devices have existing Windows machines that they manage, and as such they have a Windows-focused desktop management infrastructure.

“In an ideal world, the single pane of glass for Windows and Mac endpoint management would work, but feature parity is largely missing for the macOS components of these tools,” Wardle said.

What can Jamf Protect do?

Jamf Protect offers kextless management; instead of kernel extensions, it builds on the Endpoint Security framework that Apple provides. Kext files extend the macOS kernel and can bloat a desktop with additional code. With the release of macOS 10.15 Catalina, Apple deprecated kernel extensions to encourage a kextless approach.

“It’ll be huge for us if we can get rid of apps that use kext files,” said Tom O’Mahoney, a systems support analyst at Home Advisor in Golden, Colo. “Hopefully that’s the future of all desktop management.”


Some kernel extensions only work with certain versions of macOS and can prevent users from booting desktops after OS updates. Admins must troubleshoot this issue by searching through all of the OS’ kext files and determining which non-Apple kext file is causing the issue, as Apple automatically trusts kext files that have its developer ID.

“The kextless approach prevents a lot of issues that our current endpoint manager has with macOS updates,” said Brian Bocklett, IT engineer at Intercontinental Exchange, a financial services company in Atlanta, Ga.

Jamf Protect will also provide visibility into an organization’s entire macOS fleet. Admins can view the status of macOS devices’ security configurations and settings in the Insights tab of Jamf Protect and compare this data to endpoint security standards published by the Center for Internet Security (CIS).

Jamf Protect’s Insights tab

Michael Stover, a desktop engineer at Home Advisor, which has roughly a 90-10 split between Windows and macOS devices, said that macOS visibility is a common compliance issue.

“The CIS benchmarks are probably the biggest selling point for us,” he said. “It would be game-changing to see all that configuration data in one place and compare it to the benchmarks.”

The behavioral analysis style of macOS threat detection also drew some interest from JNUC 2019 attendees. This approach to malware detection identifies actions that files or software try to execute and searches for anomalies. If Jamf Protect finds instances of a phantom click, a common malware tactic, it can alert IT professionals to the suspicious behavior.

Jamf Protect forgoes attempts to recognize specific instances of malware; instead it recognizes the actions of potentially malicious software. Jamf Protect also detects software with an unfamiliar developer ID attempting to access data, install additional software or take actions that could invite malware onto a desktop.

“You don’t need to have every bank robber’s photo to know that someone running into a bank with a ski mask and a weapon is trying to rob that bank,” McIntyre said. 

Still, some aspects of Jamf Protect gave macOS admins pause, including the behavior analysis style of threat detection. In a Q&A after the Jamf Protect session ended, several attendees asked if the tool provides a more proactive approach for threat prevention and if Jamf Protect had any way to prevent false positives before they happen.

Spotify, for example, includes the suspicious phantom clicks as part of its UI, so users running Spotify could generate false positives. IT professionals can add exceptions to the behavioral analysis with Spotify and other similar cases, but it’s difficult to anticipate every exception they’ll need to add.

Additionally, some organizations require security standards far stricter than those of the CIS, and Jamf Protect doesn’t allow organizations to add their own benchmarks or customize the CIS benchmarks.

Jamf Protect is generally available as a paid subscription service for commercial U.S. customers, according to Jamf.


Recovering from ransomware soars to the top of DR concerns

The rise of ransomware has had a significant effect on modern disaster recovery, shaping the way we protect data and plan a recovery. It does not bring the same physical destruction of a natural disaster, but the effects within an organization — and on its reputation — can be lasting.

It’s no wonder that recovering from ransomware has become such a priority in recent years.

It’s hard to imagine a time when ransomware wasn’t a threat, but while cyberattacks date back as far as the late 1980s, ransomware in particular has had a relatively recent rise in prominence. Ransomware is a type of malware attack that can be carried out in a number of ways, but generally the “ransom” part of the name comes from one of the ways attackers hope to profit from it. The victim’s data is locked, often behind encryption, and held for ransom until the attacker is paid. Assuming the attacker is telling the truth, the data will be decrypted and returned. Again, this assumes that the anonymous person or group that just stole your data is being honest.

“Just pay the ransom” is rarely the first piece of advice an expert will offer. Not only do you not know if payment will actually result in your computer being unlocked, but developments in backup and recovery have made recovering from ransomware without paying the attacker possible. While this method of cyberattack seems specially designed to make victims panic and pay up, doing so does not guarantee you’ll get your data back or won’t be asked for more money.

Disaster recovery has changed significantly in the 20 years TechTarget has been covering technology news, but the rapid rise of ransomware to the top of the potential disaster pyramid is one of the more remarkable changes to occur. According to a U.S. government report, by 2016, 4,000 ransomware attacks were occurring daily, a 300% increase over the previous year. Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon. In this brief retrospective, take a look back at the major attacks that made headlines, evolving advice and warnings regarding ransomware, and how organizations are fighting back.

In the news

The appropriately named WannaCry ransomware attack began spreading in May 2017, using an exploit leaked from the National Security Agency targeting Windows computers. WannaCry is a worm, which means that it can spread without participation from the victims, unlike phishing attacks, which require action from the recipient to spread widely.


How big was the WannaCry attack? Affecting computers in as many as 150 countries, WannaCry is estimated to have caused hundreds of millions of dollars in damages. According to cyber risk modeling company Cyence, the total costs associated with the attack could be as high as $4 billion.

Rather than the price of the ransom itself, the biggest issue companies face is the cost of being down. Because so many organizations were infected with the WannaCry virus, news spread that those who paid the ransom were never given the decryption key, so most victims did not pay. However, many took a financial hit from the downtime the attack caused. Another major attack in 2017, NotPetya, cost Danish shipping giant A.P. Moller-Maersk hundreds of millions of dollars. And that’s just one victim.

In 2018, the city of Atlanta’s recovery from ransomware ended up costing more than $5 million, and shut down several city departments for five days. In the Matanuska-Susitna borough of Alaska in 2018, 120 of 150 servers were affected by ransomware, and the government workers resorted to using typewriters to stay operational. Whether it is on a global or local scale, the consequences of ransomware are clear.

Ransomware attacks had a meteoric rise in 2016.

Taking center stage

Looking back, the massive increase in ransomware attacks between 2015 and 2016 signaled when ransomware really began to take its place at the head of the data threat pack. Experts began emphasizing not only the importance of backup and data protection against attacks, but also planning for future potential recoveries. Depending on your DR strategy, recovering from ransomware could fit into your current plan, or you might have to start considering an overhaul.

By 2017, the ransomware threat was impossible to ignore. According to the 2018 Verizon Data Breach Investigations Report, 39% of malware attacks carried out in 2017 were ransomware, and ransomware had soared from the fifth most common type of malware to the most common.

According to the 2018 Verizon Data Breach Investigations Report, ransomware was the most prevalent type of malware attack in 2017.

Ransomware was not only becoming more prominent, but more sophisticated as well. Best practices for DR highlighted preparation for ransomware, and an emphasis on IT resiliency entered backup and recovery discussions. Protecting against ransomware became less about wondering what would happen if your organization was attacked, and more about what you would do when your organization was attacked. Ransomware recovery planning wasn’t just a good idea, it was a priority.

As a result of the recent epidemic, more organizations appear to be considering disaster recovery planning in general. As unthinkable as it may seem, many organizations have been reluctant to invest in disaster recovery, viewing it as something they might need eventually. This mindset is dangerous, and results in many companies not having a recovery plan in place until it’s too late.

Bouncing back

While ransomware attacks may feel like an inevitability — which is how companies should prepare — that doesn’t mean the end is nigh. Recovering from ransomware is possible, and with the right amount of preparation and help, it can be done.

The modern backup market is evolving in such a way that downtime is considered practically unacceptable, which bodes well for ransomware recovery. Having frequent backups available is a major element of recovering, and taking advantage of vendor offerings can give you a boost when it comes to frequent, secure backups.

Vendors such as Reduxio, Nasuni and Carbonite have developed tools aimed at ransomware recovery, and can have you back up and running without significant data loss within hours. Whether the trick is backdating, snapshots, cloud-based backup and recovery, or server-level restores, numerous tools out there can help with recovery efforts. Other vendors working in this space include Acronis, Asigra, Barracuda, Commvault, Datto, Infrascale, Quorum, Unitrends and Zerto.

Along with a wider array of tech options, more information about ransomware is available than in the past. This is particularly helpful with ransomware attacks, because the attacks in part rely on the victims unwittingly participating. Whether you’re looking for tips on protecting against attacks or recovering after the fact, a wealth of information is available.

The widespread nature of ransomware is alarming, but also provides first-hand accounts of what happened and what was done to recover after the attack. You may not know when ransomware is going to strike, but recovery is no longer a mystery.


Set up PowerShell script block logging for added security

PowerShell is an incredibly comprehensive and easy-to-use language. But administrators need to protect their organization from bad actors who use PowerShell for criminal purposes.

PowerShell’s extensive capabilities as a native tool in Windows make it tempting for an attacker to exploit the language. Increasingly, malicious software and bad actors are using PowerShell to either glue together different attack methods or run exploits entirely through PowerShell.

There are many methods and security best practices available to secure PowerShell, but one of the most valuable is PowerShell script block logging. Script blocks are a collection of statements or expressions used as a single unit; in the PowerShell language, everything inside curly brackets is a script block.

Starting in Windows PowerShell v4.0 but significantly enhanced in Windows PowerShell v5.0, script block logging produces an audit trail of executed code. Windows PowerShell v5.0 introduced a logging engine that automatically decrypts code that has been obfuscated with methods such as XOR, Base64 and ROT13. PowerShell includes the original encrypted code for comparison.

PowerShell script block logging helps with the postmortem analysis of events to give additional insights if a breach occurs. It also helps IT be more proactive with monitoring for malicious events. For example, if you set up Event Subscriptions in Windows, you can send events of interest to a centralized server for a closer look.

Set up a Windows system for logging

The two primary ways to configure script block logging on a Windows system are setting a registry value directly or specifying the appropriate settings in a group policy object.

To configure script block logging via the registry, use the following code while logged in as an administrator:

New-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Force
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging" -Name "EnableScriptBlockLogging" -Value 1 -Force

You can set PowerShell logging settings within group policy, either on the local machine or through organizationwide policies.

Open the Local Group Policy Editor and navigate to Computer Configuration > Administrative Templates > Windows Components > Windows PowerShell > Turn on PowerShell Script Block Logging.

Set up PowerShell script block logging from the Local Group Policy Editor in Windows.

When you enable script block logging, the editor unlocks an additional option, “Log script block invocation start / stop events,” which records an event when a command, script block, function or script starts and stops. This helps trace when an event happened, especially for long-running background scripts, but be aware that this option generates a substantial amount of additional data in your logs.

PowerShell script block logging tracks executed scripts and commands run on the command line.

How to configure script block logging on non-Windows systems

PowerShell Core is the cross-platform version of PowerShell for use on Windows, Linux and macOS. To use script block logging on PowerShell Core, you define the configuration in the powershell.config.json file in the $PSHome directory, which is unique to each PowerShell installation.

From a PowerShell session, navigate to $PSHome and use the Get-ChildItem command to see if the powershell.config.json file exists. If not, create the file with this command:

sudo touch powershell.config.json

Modify the file using a tool such as the nano text editor and paste in the following configuration.

{
    "PowerShellPolicies": {
        "ScriptBlockLogging": {
            "EnableScriptBlockInvocationLogging": false,
            "EnableScriptBlockLogging": true
        }
    },
    "LogLevel": "verbose"
}

Test PowerShell script block logging

Testing the configuration is easy. From the command line, run the following:

PS /> { "log me!" }
"log me!"

Checking the logs on Windows

How do you know which entries to watch for? The main event ID is 4104. This is the ScriptBlockLogging entry, which includes the user and domain, the logged date and time, the computer host and the script block text.

Open Event Viewer and navigate to the following log location: Applications and Services Logs > Microsoft > Windows > PowerShell > Operational.

Click on events until you find the one from the test that is listed as Event ID 4104. Filter the log for this event to make the search quicker.
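Rather than clicking through events manually, you can also query the log from PowerShell itself. The following is a minimal sketch using the built-in Get-WinEvent cmdlet; the log name matches the Event Viewer path above:

```powershell
# Pull the ten most recent script block logging events (ID 4104)
# from the PowerShell Operational log.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -MaxEvents 10 |
    Select-Object TimeCreated, Message |
    Format-List
```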

Event 4104 in the Windows Event Viewer details PowerShell activity on a Windows machine.

On PowerShell Core on Windows, the log location is: Applications and Services Logs > PowerShellCore > Operational.

Log location on non-Windows systems

On Linux, PowerShell script block logging will log to syslog. The location will vary based on the distribution. For this tutorial, we use Ubuntu, which has syslog at /var/log/syslog.

Run the following command to show the log entry; you must elevate with sudo in this example and on most typical systems:

sudo cat /var/log/syslog | grep 'log me'

2019-08-20T19:40:08.070328-05:00 localhost powershell[9610]: (6.2.2:9:80) [ScriptBlock_Compile_Detail:ExecuteCommand.Create.Verbose] Creating Scriptblock text (1 of 1):#012{ "log me!" }#012#012ScriptBlock ID: 4d8d3cb4-a5ef-48aa-8339-38eea05c892b#012Path:

To set up a centralized server on Linux, things are a bit different since you’re using syslog by default. You can use rsyslog to ship your logs to a log aggregation service to track PowerShell activity from a central location.
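As one hedged example, a minimal rsyslog forwarding rule might look like the following; the filename, hostname and port here are placeholders for your own environment:

```
# /etc/rsyslog.d/50-powershell-forward.conf (hypothetical filename)
# Forward entries tagged by PowerShell to a central collector.
# "@@" means TCP; a single "@" would mean UDP.
:syslogtag, startswith, "powershell" @@logserver.example.com:514
```

After saving the file, restart rsyslog (for example, sudo systemctl restart rsyslog) for the rule to take effect.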


Create and configure a shielded VM in Hyper-V

Creating a shielded VM to protect your data is a relatively straightforward process that consists of a few simple steps and PowerShell commands.

A shielded VM depends on a dedicated server separate from the Hyper-V host that runs the Host Guardian Service (HGS). The HGS server must not be domain-joined because it is going to take on the role of a special-purpose domain controller. To install HGS, open an administrative PowerShell window and run this command:

Install-WindowsFeature -Name HostGuardianServiceRole -Restart

Once the server reboots, create the required domain. Here, the password is [email protected] and the domain name is PoseyHGS.net. Create the domain by entering these commands:

$AdminPassword = ConvertTo-SecureString -AsPlainText '[email protected]' -Force

Install-HgsServer -HgsDomainName 'PoseyHGS.net' -SafeModeAdministratorPassword $AdminPassword -Restart

Figure A. This is how to install the Host Guardian Service server.

The next step in the process of creating and configuring a shielded VM is to create two certificates: an encryption certificate and a signing certificate. In production, you must use certificates from a trusted certificate authority. In a lab environment, you can use self-signed certificates, such as those used in the example below. To create these certificates, use the following commands:

$CertificatePassword = ConvertTo-SecureString -AsPlainText '[email protected]' -Force
$SigningCert = New-SelfSignedCertificate -DNSName "signing.poseyhgs.net"
Export-PfxCertificate -Cert $SigningCert -Password $CertificatePassword -FilePath 'C:\Certs\SigningCert.pfx'
$EncryptionCert = New-SelfSignedCertificate -DNSName "encryption.poseyhgs.net"
Export-PfxCertificate -Cert $EncryptionCert -Password $CertificatePassword -FilePath 'C:\Certs\EncryptionCert.pfx'

Figure B. This is how to create the required certificates.

Now, it’s time to initialize the HGS server. To perform the initialization process, use the following command:

Initialize-HGSServer -HGSServiceName 'hgs' -SigningCertificatePath 'C:\Certs\SigningCert.pfx' -SigningCertificatePassword $CertificatePassword -EncryptionCertificatePath 'C:\Certs\EncryptionCert.pfx' -EncryptionCertificatePassword $CertificatePassword -TrustTPM

Figure C. This is what the installation process looks like.

The last thing you need to do when provisioning the HGS server is to set up conditional domain name service (DNS) forwarding. To do so, use the following commands:

Add-DnsServerConditionalForwarderZone -Name "PoseyHGS.net" -ReplicationScope "Forest" -MasterServers

Netdom trust PoseyHGS.net /domain:PoseyHGS.net /userD:PoseyHGS.net\Administrator /password: /add

In the process of creating and configuring a shielded VM, the next step is to add the guarded Hyper-V host to the Active Directory (AD) domain that you just created. You must create a global AD security group called GuardedHosts. You must also set up conditional DNS forwarding on the host so the host can find the domain controller.

Once all of that is complete, retrieve the security identifier (SID) for the GuardedHosts group, and then add that SID to the HGS attestation host group. From the domain controller, enter the following command to retrieve the group’s SID:

Get-ADGroup “GuardedHosts” | Select-Object SID

Once you know the SID, run this command on the HGS server:

Add-HgsAttestationHostGroup -Name “GuardedHosts” -Identifier “

Now, it’s time to create a code integrity policy on the Hyper-V server. To do so, enter the following commands:

New-CIPolicy -Level FilePublisher -Fallback Hash -FilePath 'C:\Policy\HWLCodeIntegrity.xml'

ConvertFrom-CIPolicy -XMLFilePath 'C:\Policy\HWLCodeIntegrity.xml' -BinaryFilePath 'C:\Policy\HWLCodeIntegrity.p7b'

Now, you must copy the P7B file you just created to the HGS server. From there, run this command:

Add-HGSAttestationCIPolicy -Path 'C:\HWLCodeIntegrity.p7b' -Name 'StdGuardHost'

Get-HGSServer

At this point, the server should display an attestation URL and a key protection URL. Be sure to make note of both of these URLs. Now, go back to the Hyper-V host and enter this command:

Set-HGSClientConfiguration -KeyProtectionServerURL “” -AttestationServerURL “

To wrap things up on the Hyper-V server, retrieve an XML file from the HGS server and import it. You must also define the host’s HGS guardian. Here are the commands to do so:

Invoke-WebRequest "/service/metadata/2014-07/metadata.xml" -OutFile 'C:\certs\metadata.xml'

Import-HGSGuardian -Path 'C:\certs\metadata.xml' -Name 'PoseyHGS' -AllowUntrustedRoot

Figure D. Shield a Hyper-V VM by selecting a single checkbox.

Once you import the host guardian into the Hyper-V server, you can use PowerShell to configure a shielded VM. However, you can also enable shielding directly through the Hyper-V Manager by selecting the Enable Shielding checkbox on the VM’s Settings screen, as shown in Figure D above.
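As a sketch of the PowerShell route, the commands below assume the guardian imported above ('PoseyHGS') and a VM named 'DemoVM' (a hypothetical name); the cmdlets come from the Hyper-V and HgsClient modules:

```powershell
# Build a key protector from the imported guardian, apply it to the VM,
# then enable the virtual TPM and shielding policy that depend on it.
$Guardian = Get-HgsGuardian -Name 'PoseyHGS'
$KeyProtector = New-HgsKeyProtector -Owner $Guardian -AllowUntrustedRoot
Set-VMKeyProtector -VMName 'DemoVM' -KeyProtector $KeyProtector.RawData
Enable-VMTPM -VMName 'DemoVM'
Set-VMSecurityPolicy -VMName 'DemoVM' -Shielded $true
```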

Google Cloud security adds data regions and Titan security keys

Multiple improvements for Google Cloud security aim to help users protect data through better access management, more data security options and greater transparency.
More than half of the security features announced are either in beta or part of the G Suite Early Adopter Program, but in total the additions should offer better control and transparency for users.

The biggest improvement in Google Cloud security comes in identity and access management. Google has developed its own Titan multi-factor physical security key — similar to a YubiKey — to protect users against phishing attacks. Google previously reported that there have been no confirmed account takeovers in more than a year since it required all employees to use physical security keys, and according to a Google spokesperson, Titan keys have been among the keys available to employees.

The Titan security keys are FIDO keys that include “firmware developed by Google to verify its integrity.” Google announced it is offering two models of Titan keys for Cloud users: one based on USB and NFC and one that uses Bluetooth in order to support iOS devices as well. The keys are available now to Cloud customers and will come to the Google Store soon. Pricing details have not been released.

“The Titan security key provides a phishing-resistant second factor of authentication. Typically, our customers will place it in front of high-value users or content administrators and root users, the compromise of those would be much more damaging to an enterprise customer … or specific applications which contain sensitive data, or sort of the crown jewels of corporate environments,” Jess Leroy, director of product management for Google Cloud, told reporters in a briefing. “It’s built with a secure element, which includes firmware that we built ourselves, and it provides a ton of security with very little interaction and effort on the part of the user.”

However, Stina Ehrensvard, CEO and founder of Yubico, the manufacturer of YubiKey two-factor authentication keys, headquartered in Palo Alto, Calif., noted in a blog post that her company does not see Bluetooth as a good option for a physical security key.

“Google’s offering includes a Bluetooth (BLE) capable key. While Yubico previously initiated development of a BLE security key, and contributed to the BLE U2F standards work, we decided not to launch the product as it does not meet our standards for security, usability and durability,” Ehrensvard wrote. “BLE does not provide the security assurance levels of NFC and USB, and requires batteries and pairing that offer a poor user experience.”

In addition to the Titan keys, Google Cloud security will have improved access management with the implementation of the context-aware access approach Google used in its BeyondCorp network setups.

“Context-aware access allows organizations to define and enforce granular access to [Google Cloud Platform] APIs, resources, G Suite, and third-party SaaS apps based on a user’s identity, location, and the context of their request. This increases your security posture while decreasing complexity for your users, giving them the ability to seamlessly log on to apps from anywhere and any device,” Jennifer Lin, director of product management for Google Cloud, wrote in the Google Cloud security announcement post. “Context-aware access capabilities are available for select customers using VPC Service Controls, and are coming soon for customers using Cloud Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), and Cloud Identity.”

Data transparency and control

New features also aim to improve Google Cloud security visibility and control over data. Access Transparency will offer users a “near real-time log” of the actions taken by administrators, including Google engineers.

“Inability to audit cloud provider accesses is often a barrier to moving to the cloud. Without visibility into the actions of cloud provider administrators, traditional security processes cannot be replicated,” Google wrote in documentation. “Access Transparency enables that verification, bringing your audit controls closer to what you can expect on premises.”

In terms of Google Cloud security and control over data, Google will also now allow customers to decide in what region data will be stored. Google described this feature as allowing multinational organizations to protect their data with geo-redundancy, but in a way that organizations can follow any requirements regarding where in the world data is stored.

A Google spokesperson noted via email that the onus for ensuring that regional data storage complies with local laws would be on the individual organizations.

Other Google Cloud security improvements

Google announced several features that are still in beta, including Shielded Virtual Machines (VMs), which will allow users to monitor and react to changes in the VM to protect against tampering; Binary Authorization, which will force signature validation when deploying container images; Container Registry Vulnerability Scanning, which will automatically scan Ubuntu, Debian and Alpine images to prevent deploying images that contain any vulnerable packages; geo-based access control for Cloud Armor, which helps defend users against DDoS attacks; and Cloud HSM, a managed cloud-hosted hardware security module (HSM) service.

Chrome site isolation arrives to mitigate Spectre attacks

Version 67 of Google Chrome enabled site isolation by default in an effort to protect users against Spectre-based attacks.

Google has been testing Chrome site isolation since version 63, but has now decided the feature is ready for prime time to help mitigate Spectre attacks. Google described Chrome site isolation as a “large change” to the browser’s architecture “that limits each renderer process to documents from a single site. As a result, Chrome can rely on the operating system to prevent attacks between processes, and thus, between sites.”

“When site isolation is enabled, each renderer process contains documents from at most one site. This means all navigations to cross-site documents cause a tab to switch processes,” Charlie Reis, site isolator at Google, wrote in a blog post. “It also means all cross-site iframes are put into a different process than their parent frame, using ‘out-of-process iframes.’ Splitting a single page across multiple processes is a major change to how Chrome works, and the Chrome Security team has been pursuing this for several years, independently of Spectre.”

This is a major change to the previous multi-process architecture in Chrome in which there were ways to connect to other sites in the same process using iframes or cross-site pop-ups. Reis noted there are still ways an attacker could access cross-site URLs even with Chrome site isolation enabled; he warned developers to ensure “resources are served with the right MIME type and with the nosniff response header,” in order to minimize the risk of data leaks.
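As an illustration of that advice (a hypothetical server-side sketch, not Chrome code), serving a resource with an explicit MIME type and the nosniff header can be done with Python's standard library:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoSniffHandler(BaseHTTPRequestHandler):
    """Serves a JSON resource with the headers Reis recommends."""

    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        # Declare the correct MIME type explicitly...
        self.send_header("Content-Type", "application/json")
        # ...and forbid MIME sniffing so the browser won't reinterpret
        # the response as a different, potentially readable, type.
        self.send_header("X-Content-Type-Options", "nosniff")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to an ephemeral port for demonstration purposes.
    HTTPServer(("127.0.0.1", 0), NoSniffHandler)
```

With those two headers set, a cross-site request for the resource cannot be mislabeled into a sniffable type, which is what limits what a Spectre-style read could pull into an attacker's process.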

A source close to Google described the aim of Chrome site isolation as an effort to protect the most sensitive data, so even if new variants of Spectre or other side-channel attacks are discovered, the attack may be successful but Chrome will keep things worth stealing out of reach.

Brandon Czajka, vice CIO at Switchfast Technologies, said it’s reassuring to see Google “lead the field” by developing new features such as Chrome site isolation.

“Google’s site isolation appears to work as a means of separation. Rather than allowing Chrome to process data for all websites opened under a single renderer, site isolation separates the rendering process to limit a site’s access to user data that may have been entered on other sites (or in other words, increases confidentiality),” Czajka wrote via email. “So, while a user could still fall victim to a Spectre attack, its scope should be more limited to just the malicious site rather than affording it unlimited access.”

Chrome site isolation has been enabled for 99% of users on Windows, Mac, Linux and Chrome OS, according to Google, with Android support still in the works. However, the added protection and increased number of processes will require more system resources.

“Site isolation is a significant change to Chrome’s behavior under the hood, but it generally shouldn’t cause visible changes for most users or web developers (beyond a few known issues),” Reis wrote. “Site isolation does cause Chrome to create more renderer processes, which comes with performance tradeoffs: on the plus side, each renderer process is smaller, shorter-lived, and has less contention internally, but there is about a 10-13% total memory overhead in real workloads due to the larger number of processes.”

Czajka said while performance may be one of the most important aspects for any business, “it is just one piece of the puzzle.”

“While Google’s site isolation may require more memory, and thus may slow browser performance, it is these types of security measures that help to secure the confidentiality and integrity of user data,” Czajka wrote.

New Aquatic Skins Out Today!

We’re supporting the incredible work of The Nature Conservancy to protect and restore these coral reefs (which you can learn more about here). Both from sales of this new skin pack and with the promise to donate more money for every coral block YOU place in the game.

It’s true! As soon as players have collectively placed ten million coral blocks underwater in Minecraft, we’ll donate one hundred thousand dollars to The Nature Conservancy and their efforts. We’ve got no doubt you’ll manage it in no time!

So what are you waiting for? Build something amazing out of coral, help us help the oceans and enjoy the new skin pack!

Learn more about the Nature Conservancy by clicking this lovely line of green text.

IMPORTANT LEGAL STUFF:

Net proceeds from sales of the Coral Crafter Skin Pack excluding platform and marketplace operating fees will be donated to The Nature Conservancy, 4245 North Fairfax Drive, Suite 100, Arlington, VA, 22203-1606, USA, www.nature.org. No portion of purchase or gift is tax deductible.

Minecraft will contribute $100,000.00 to The Nature Conservancy to protect and restore coral reefs around the world once players have placed 10 million coral blocks underwater, beginning on June 8th. (Coral blocks are only counted in Minecraft versions without “Edition” in the title.) The mission of The Nature Conservancy is to conserve the land and waters on which all life depends. More information about the Conservancy is available at www.nature.org.

We’ll update on Twitter when ten million coral blocks have been placed.