Despite considerable hype, 5G connectivity likely won’t have a big impact on the enterprise this year.
Although several 5G-ready, enterprise-focused devices were announced recently, including the Dell Latitude 9510 and the 2020 HP Elite Dragonfly G2 model, analysts said the fifth-generation cellular network technology itself is not enterprise-ready.
This slower rollout may be to the benefit of IT professionals, however. With 5G connectivity looming on the horizon, experts said, there is an opportunity for IT admins to contribute to strategic plans for how the technology will be implemented when it does arrive.
Bill Menezes, senior principal analyst at Gartner, said the expansion of nationwide 5G cellular and data networks is still a work in progress. Such networks, he said, have been built out in some smaller countries like South Korea, where there is limited landmass to cover and a regulatory impetus to upgrade. In the U.S. and Europe, the networks have grown, but coverage is not yet ubiquitous enough for businesses to rely on it.
Device support is another limiting factor. Menezes said iOS has the dominant share of mobile devices in the business world; until an iPad or iPhone supports the technology, organizations using iOS will be without a 5G connectivity option. With 5G-capable laptops, he said, IT should weigh the benefit of having the technology available on those relatively long-lived devices against whether their employees would see any short-term gains through it.
Holger Mueller, principal analyst at Constellation Research Inc., said he did not expect much of a 5G effect on the enterprise in 2020.
“The only impact [this year] is that customers will have to shell out more money for an iPhone, if an iPhone comes out that supports 5G,” he said.
The technology has the potential to bring about some fundamental changes, whether it involves controlling thousands of devices on the factory floor or guiding self-driving cars, but substantial work must take place before that occurs, according to Mueller. He anticipates that the rollout will be slower than the upgrade from 3G to 4G, and the early benefits will be seen in the more densely populated areas rather than nationwide.
While 4G relied on cell towers covering a wide radius, 5G is expected to use a larger number of smaller stations, as the millimeter wave spectrum upon which it operates only works over short distances. Mueller said a substantial amount of buildout must take place before the service is available across the country.
Forrester Research analyst Dan Bieler described 2020 as the early days of 5G; IT, at this point, may be just starting to think about the technology and the impact it could have on operations.
“5G is one of the technologies they need to keep an eye on, in terms of the use cases they need to support,” he said.
When it does arrive, how will businesses use 5G?
The most credible interest in 5G currently, Bieler said, comes from the manufacturing sector. The technology, given its low latency, would be able to replace the data cables leading to factory machinery. In one scenario he mentioned, stationary and mobile robots could work in concert to complete tasks; given the need for perfect harmony in such an instance, a difference of a few milliseconds of latency is important.
Menezes said several industries, such as healthcare, are poised to benefit from 5G connectivity.
“There are some more compelling use cases than others, [like] anything related to large, dense data files that people need to upload or download in a mobile setting,” he said.
Remote medical care is one such use; sending high-resolution images to an off-site physician could speed up diagnoses and treatments.
Like Bieler, Menezes also suggested VR and AR could be better powered by 5G connectivity. A worker repairing or maintaining field equipment could use augmented reality to access documentation or seek real-time human assistance in resolving a problem.
As with 5G, AR may yet bring about a shift in business, but several obstacles remain in rolling out the technology. The software managing such devices, for instance, must be able to handle proprietary information securely. Some firms have begun to offer management tools for AR headsets, like Lenovo with its ThinkReality platform.
How will 5G impact IT admins?
Menezes said implementing 5G would entail a learning curve for IT professionals, although the projected slow adoption of the technology could help with that. If 5G connectivity brings about more sophisticated and demanding applications for workers, IT admins could have the chance to participate in the ramp-up.
According to Bieler, IT’s role in bringing 5G into the enterprise will reflect a general change to the profession’s character.
“I think it’s part of a broader shift in what is expected out of IT professionals,” he said. “The days are gone when someone tells them to implement a technology and they’re just responsible for the rollout.”
Bieler said companies now expect IT to take part in planning and strategy, as part of the overall effort to achieve business objectives and target priorities. Instead of merely selecting a carrier, he said, IT professionals are being asked to evaluate where mobile makes sense and what kind of processes need to be supported.
“IT managers and teams have to become much more strategically involved,” he said.
Mueller said he anticipated 5G would affect some parts of an IT admin’s job — tracking devices, for example — but did not foresee a fundamental shift in the profession. The technology, he said, has been overhyped thus far, although he would be happy to be proven wrong.
If your defenses and backups fail despite your best efforts, your ransomware recovery effort can take one of several paths to restore normalcy to your organization.
Ransomware is bad enough. Don’t rush to bring systems and workloads back online and cause additional problems. The first item on your agenda is to take inventory of what still functions and what needs repairs. This has to be done quickly, but without mistakes. Management will want to know what needs to be done, but you can’t give a report until you have a full understanding. While you don’t need to break down every single server, you will need to have everything categorized. Think Active Directory, file servers, backups, networking infrastructure, email and communication, and production servers to start.
Take stock of the situation
The list of affected systems and VMs won’t be comprehensive at first. Start with the machines that are a priority, and production servers are not the priority in this case. If Active Directory is down, it’s a safe bet most of your production servers, and the rest of the IT infrastructure, won’t run correctly even if they weren’t directly affected.
To start the ransomware recovery effort, check your backups before anything else. Too many folks have deleted encrypted VMs only to discover the malware had also wiped out their backup systems, going from bad to worse. Mistakes happen when you rush.
A relatively easy path to restoring servers does exist if your backups are intact, current and operational. Test the restoration process before you delete any VMs. Rather than removing affected machines, relocate them to lower-tier storage, external storage or even local storage on a host. The goal is to get the encrypted VMs out of the way to give yourself room to work, then run the restores and get the VMs running before you remove their encrypted counterparts.
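The relocate-restore-verify sequence can be sketched in a few lines. This is an illustrative outline only, not a product workflow; the idea of checking the restored file against a hash from your backup catalog is an assumption about how you might confirm a restore succeeded before touching the quarantined original:

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    # Hash in chunks so multi-gigabyte VM disks don't exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def quarantine_then_verify(encrypted_vm: Path, restored_vm: Path,
                           expected_hash: str, quarantine_dir: Path) -> bool:
    """Move the encrypted VM aside instead of deleting it, then report
    success only if the restored copy matches the known-good hash."""
    quarantine_dir.mkdir(parents=True, exist_ok=True)
    shutil.move(str(encrypted_vm), str(quarantine_dir / encrypted_vm.name))
    if not restored_vm.exists():
        return False
    return sha256(restored_vm) == expected_hash
```

Only after this returns True for a machine, and the restored VM actually boots, should the quarantined copy be deleted.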
It might be time to make difficult choices
If the attack corrupted your backup system or the ransomware recovery effort failed, then someone above your pay grade will have to make some decisions. You will have a few difficult conversations, partly because the responsibility for the backups, and their reliability, rested on you. The fault might not be entirely yours, for reasons such as a lack of proper funding, but that conversation will have to wait. At the moment, it’s time to make a decision: pay the ransom, rebuild the systems or file a report.
Reporting requires the involvement of senior management and the company legal team. If you work for a government entity or public company, then you might have very specific guidelines that you must follow for legal reasons. If you work for a private company, then you still have possible legal issues with your customers about what you can and cannot disclose. No matter what you say, it will not be taken well. You want to be honest with your customers, but you also need to be mindful and limit how much data you share publicly.
The other aspect of reporting involves the authorities. Your organization might not even have been the intended target if you were hit by an older ransomware variant. If that’s the case, a decryption tool might already exist. It’s a long shot, but worth checking before you rebuild from scratch.
While distasteful, paying the ransom is also an option. You need to consider how much it will cost to rebuild and recover versus handing over the ransom. It’s not an easy call to make, because a payment does not come with any guarantees.
Most companies that pay the ransom typically don’t disclose that they paid or that they were even attacked. I suspect most organizations get their data unlocked, otherwise the ransomware business model would collapse.
The challenge with rebuilding is the effort involved. Relatively few companies have people who fully understand how every aspect of their environment works. Many IT infrastructures are the combined result of in-house experts and outside consultants. People install systems and take that knowledge with them when they leave. Their replacements learn how to keep these systems online, but that is very different from installing or building them from scratch. Repairing Active Directory is a challenge, but rebuilding an Active Directory with thousands of users, groups and permissions from documentation, if any exists, is next to impossible unless you have a lot of time and expertise.
Recovering from a ransomware attack is not an easy task, because no two situations are identical. If your defenses and backup recovery fail, the reconstruction effort will be neither easy nor cheap. You will either have to pay the ransom or spend money on overtime and consultants to rebuild mission-critical systems. Chances are your customers will find out what is happening during this recovery process, so you’ll need a communication plan and a single point of contact for the sake of consistency.
Ransomware isn’t something just for the IT department to handle; the decisions and the road to recovery will involve several stakeholders and real costs. Plan ahead and map out your steps to avoid rushing into bad choices that can’t be reversed.
Despite all the promise of cloud computing, it remains out of reach for administrators who cannot, for different reasons, migrate out of the data center.
Many organizations still grapple with concerns, such as compliance and security, that weigh down any aspirations to move workloads from on-premises environments. For these organizations, hyper-converged infrastructure (HCI) products have stepped in to approximate some of the perks of the cloud, including scalability and high availability. In early 2019, Microsoft stepped into this market with Azure Stack HCI. While it was a new name, it was not an entirely new concept for the company.
Some might see Azure Stack HCI as a mere rebranding of the existing Windows Server Software-Defined (WSSD) program, but there are some key differences that warrant further investigation from shops that might benefit from a system that integrates with the latest software-defined features in the Windows Server OS.
What distinguishes Azure Stack HCI from Azure Stack?
When Microsoft introduced its Azure Stack HCI program in March 2019, there was some initial confusion from many in IT. The company already offered a similarly named product, Azure Stack, which brings a version of its Azure cloud platform into the data center.
Microsoft developed Azure Stack HCI for local VM workloads that run on Windows Server 2019 Datacenter edition. While not explicitly tied to the Azure cloud, organizations that use Azure Stack HCI can connect to Azure for hybrid services, such as Azure Backup and Azure Site Recovery.
Azure Stack HCI offerings use OEM hardware from vendors such as Dell, Fujitsu, Hewlett Packard Enterprise and Lenovo that is validated by Microsoft to capably run the range of software-defined features in Windows Server 2019.
How is Azure Stack HCI different from the WSSD program?
While Azure Stack is essentially an on-premises version of the Microsoft cloud computing platform, its approximate namesake, Azure Stack HCI, is more closely related to the WSSD program that Microsoft launched in 2017.
Microsoft made its initial foray into the HCI space with its WSSD program, which utilized the software-defined features in the Windows Server 2016 Datacenter edition on hardware validated by Microsoft.
Windows Server gives administrators the virtualization layers necessary to avoid the management and deployment issues related to proprietary hardware. Windows Server’s software-defined storage, networking and compute capabilities enable organizations to more efficiently pool the hardware resources and use centralized management to sidestep traditional operational drawbacks.
For Azure Stack HCI, Microsoft uses the Windows Server 2019 Datacenter edition as the foundation of this product with updated software-defined functionality compared to Windows Server 2016. For example, Windows Server 2019 offers expanded pooled storage of 4 petabytes in Storage Spaces Direct, compared to 1 PB on Windows Server 2016. Microsoft also updated the clustering feature in Windows Server 2019 for improved workload resiliency and added data deduplication to give an average of 10 times more storage capacity than Windows Server 2016.
What are the deployment and management options?
The Azure Stack HCI product requires the use of the Windows Server 2019 Datacenter edition, which the organization might get from the hardware vendor for a lower cost than purchasing it separately.
To manage the Azure Stack HCI system, Microsoft recommends using Windows Admin Center, a relatively new GUI tool developed as the potential successor to Remote Server Administration Tools, Microsoft Management Console and Server Manager. Microsoft tailored Windows Admin Center for smaller deployments, such as Azure Stack HCI.
Windows Admin Center encapsulates a number of traditional server management utilities for routine tasks, such as registry edits, but it also handles more advanced functions, such as the deployment and management of Azure services, including Azure Network Adapter for companies that want to set up encryption for data transmitted between offices.
Companies that purchase an Azure Stack HCI system get Windows Server 2019 for its virtualization technology that pools storage and compute resources from two nodes up to 16 nodes to run VMs on Hyper-V. Microsoft positions Azure Stack HCI as an ideal system for multiple scenarios, such as remote office/branch office and VDI, and for use with data-intensive applications, such as Microsoft SQL Server.
How much does it cost to use Azure Stack HCI?
The Microsoft Azure Stack HCI catalog features more than 150 models from 20 vendors. A general-purpose node will cost about $10,000, but the final price will vary depending on the level of customization the buyer wants.
There are multiple server configuration options that cover a range of processor models, storage types and networking. For example, some nodes have ports with 1 Gigabit Ethernet, 10 GbE, 25 GbE and 100 GbE, while other nodes support a combination of 25 GbE and 10 GbE ports. Appliances optimized for better performance that use all-flash storage will cost more than units with slower, traditional spinning disks.
On top of the hardware price are the annual maintenance and support fees, which are typically a percentage of the appliance’s purchase price.
If a company opts to tap into the Azure cloud for certain services, such as Azure Monitor to assist with operational duties by analyzing data from applications to determine if a problem is about to occur, then additional fees will come into play. Organizations that remain fixed with on-premises use for their Azure Stack HCI system will avoid these extra costs.
Despite a public pledge of “zero tolerance” for malicious activity, a digital ad network previously tied to major malvertising campaigns was still connecting to a malicious IP address involved in traffic hijacking.
Adsterra, an ad network based in Cyprus, was implicated in an extensive malvertising campaign discovered by Check Point Software Technologies in 2018. Adsterra claimed to have blocked the malicious activity and improved its defenses, but a SearchSecurity investigation discovered the ad network continued connecting to a malicious server used in the campaign as recently as last month.
The campaign originally began with a party, dubbed “Master134” by Check Point researchers, posing as a legitimate publisher on Adsterra’s ad network platform. Master134 used more than 10,000 compromised WordPress sites to redirect visitors to a malicious server in Ukraine with the IP address 18.104.22.168. The hijacked traffic was sold on Adsterra’s RTB platform to other ad networks, resold to still more networks, and ultimately ended up with threat actors running several well-known malicious sites and exploit kits.
In Check Point’s report, researchers described Adsterra as “infamous” and said the ad network had a direct relationship with “Master134” by paying the threat actor for the hijacked traffic. Lotem Finkelsteen, Check Point’s threat intelligence analysis team leader and co-author of the report, told SearchSecurity that Adsterra either knew it was accepting hijacked traffic or chose to ignore the signs.
Adsterra responded to the report with a blog post titled “Zero Tolerance for Illegal Traffic Sources,” in which the company denied the allegations that it was knowingly involved with Master134. The company also blamed other third-party ad networks, even though Check Point reported Adsterra received the traffic directly from Master134’s IP address.
“[W]e would like to emphasize that we do not accept traffic from hacked/hijacked sites. We have zero tolerance for illegal traffic sources,” the statement read. “All publishers’ accounts that were mentioned in that article have been suspended. Malware ads are prohibited in Adsterra Network and we have a monitor system that checks all campaigns and stops all suspicious activity.”
Despite the denials and the supposed actions taken by Adsterra, a SearchSecurity investigation found the ad network was still connecting to the 22.214.171.124 IP address as recently as last month. When confronted with this information, Adsterra offered a series of explanations that called into question the company’s efforts to prevent malvertising and ad fraud.
Open source intelligence tools revealed that the 126.96.36.199 IP address, which is still active, was connecting to ecpms.net, a redirection domain owned and operated by Adsterra, during July and August of this year.
SearchSecurity emailed Adsterra in August about the domain’s connection to the Master134 IP address and received a reply from the company’s support team, which said the Adsterra policy team would investigate the issue. The email also said the company “considers the [Master134] case closed.”
We sent a follow-up email to Adsterra asking for more information about how it bans malicious accounts and what steps the company takes to prevent repeat offenders from abusing Adsterra’s self-service platform.
“When we ‘ban an account’ in our system we block the account and all payments associated with that account. We also block all ads being displayed to that account,” the support team wrote. “We investigate all incoming reports on illegal activities on our network and do our best to prevent them from happening. We utilize special software (both in-house and 3rd party) to scan and monitor ads and traffic 24/7. Furthermore, after the incident with ‘Master134’ we have purchased additional 3rd party software to scan our feed, but you should understand that it is always a cat-mouse game when it comes to catching a ‘bad actor’.”
SearchSecurity also asked Adsterra about the allegations that the ad network was knowingly accepting traffic from malicious sources like Master134. “We serve hundreds of millions of ad impressions per day and we don’t need any illegal traffic because our advertisers simply won’t accept it and pay for it,” the support team wrote.
While ecpms.net’s connections to Master134 appeared to end following the conversation with Adsterra’s support team, SearchSecurity discovered that a second domain owned by the company, 7fkm2r4pzi.com, was also connecting to the malicious IP address. According to RiskIQ’s PassiveTotal Community Edition, the connections from 188.8.131.52 to the domain began in August, shortly after the connections to ecpms.net ceased.
SearchSecurity emailed Adsterra again several times about the second domain, but the company did not respond initially. We then reached out to the ad network’s official Twitter account and asked why the Adsterra domains were still connecting to the Master134 server. In a Twitter exchange, Adsterra said the Master134 threat actors set up a new account, which was also banned. The ad network also said it “blacklisted all traffic with this IP in referrer header.”
“They’ll think twice before sending traffic to our network after no payment,” Adsterra said.
We asked why Adsterra hadn’t taken the step of banning the IP address last year following Check Point’s Master134 report and the resulting press coverage, especially since the company said it had “zero tolerance” for such activity.
“Since the publisher’s account was banned without a payout and they removed our link shortly after, we considered they understood their traffic is not welcome here. It took them a while to sign up again,” Adsterra tweeted. “Please also note that blacklisting this IP in a referrer header does not give 100% protection — a portion of traffic can be redirected with no referrer. However, we admit this could have been done before as a precaution. Thus, we have updated our internal policies accordingly.”
Adsterra said the malicious account didn’t receive its payment due, but the company couldn’t say whether or not the fraudulent accounts operated by Master134 had ever received payment from the company.
SearchSecurity requested more information about the accounts and the steps Adsterra took to stop the malicious activity on its websites. The ad network responded with information similar to what it previously tweeted but did not address those questions directly.
“The executive team has been notified of this issue,” Adsterra support team wrote. “However, we find this case closed and the new account has been banned as well.”
According to RiskIQ’s PassiveTotal, the connections from Master134 to the 7fkm2r4pzi.com domain ended on Sept. 14, the same day as the above email. Adsterra hasn’t responded to further requests from SearchSecurity.
Adsterra’s prevention methods questioned
Security vendors in the ad fraud and malvertising prevention market said Adsterra’s method of blacklisting the IP address is a largely useless approach and that stronger measures are needed to stop threat actors like Master134.
Hagai Shechter, CEO of Fraudlogix, an ad fraud prevention vendor based in Hallandale Beach, Fla., said restricting IP addresses via HTTP headers isn’t effective because, as Adsterra itself pointed out, threat actors can remove malicious IP addresses from their headers and make HTTP requests with “no-referrer.” In addition, Shechter said public blacklists, even if implemented effectively at the firewall level, are often outdated.
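The weakness Shechter describes is easy to demonstrate. The sketch below is a made-up, minimal referrer-based filter of the kind Adsterra described; the blocked address is the documentation IP 203.0.113.7, a stand-in rather than any real infrastructure:

```python
# Hypothetical blocklist; 203.0.113.7 is a reserved documentation address.
BLOCKED_SOURCES = {"203.0.113.7"}


def allow_request(headers: dict) -> bool:
    """Reject a request only if its Referer header names a blocked source."""
    referer = headers.get("Referer", "")
    return not any(bad in referer for bad in BLOCKED_SOURCES)


# Traffic that honestly declares the hijacking server is rejected...
assert allow_request({"Referer": "http://203.0.113.7/redirect"}) is False
# ...but identical traffic passes once the sender simply omits the header.
assert allow_request({}) is True
```

Because the Referer header is entirely under the sender’s control, a check like this only filters out actors who cooperate, which is why the vendors quoted here argue for stronger measures than header blacklisting.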
“It’s rare to find a publicly available IP blacklist that’s going to be recent and that will have the good stuff in there,” he said.
It’s also unclear why Adsterra’s additional investment in ad security and new scanners didn’t prevent the Master134 IP address from repeatedly connecting to the ad network’s domains, given the address was known to be malicious. According to a July blog post titled “We Keep You Safe,” Adsterra said it felt “bound to take action” and announced it had added a second ad security scanner from a vendor called AdSecure to further reduce fraud and malvertising.
However, AdSecure was launched in 2017 by a company called ExoGroup, based in Barcelona. ExoGroup is also the parent company of ad network ExoClick that, like Adsterra, was implicated in the Master134 campaign in 2018, as well as previous malvertising campaigns. According to AdSecure’s website, the company’s “partners” include several ad networks including ExoClick, Adsterra and AdKernel, which was also connected to the Master134 campaign.
SearchSecurity reached out to AdSecure to learn more about how its flagship product worked and its relationship to ExoClick and the other ad networks. The company did not respond. [UPDATE: AdSecure emailed a statement to SearchSecurity the day after this article was published. The statement is contained below.]
SearchSecurity spoke with GeoEdge, the other ad security vendor used by Adsterra, which declined to address the ad network directly. GeoEdge CEO Amnon Siev said that in general, some ad network clients choose to essentially ignore the alerts that GeoEdge provides about malicious activity and allow suspicious traffic and IP addresses on their platforms.
Shechter agreed and said clients have full control over how they use Fraudlogix’s products and some simply choose to look the other way when it comes to signs of click fraud and malvertising.
“That absolutely happens,” he said. “The fuel for the industry is volume. If Google blocks out 10% of their ad traffic, they can still survive, but when you’re a smaller network, that 10% could be the difference between staying in business or not.”
Siev added that he believes AdSecure isn’t an effective solution for preventing ad fraud and malvertising. “I’ve never tested their solution,” he said, “but I know from talking to customers that have switched from them to us what gaps are there.”
He also criticized AdSecure’s connection to ExoClick. “We continue to flag many of [ExoClick’s] campaigns,” Siev said. “They’ve pushed back on us and say there’s no malicious activity in their campaigns.”
In a statement sent to SearchSecurity on Nov. 1, AdSecure sales manager Bryan Taylor wrote: “AdSecure is a reporting tool, what clients do with those reports and the measures they implement to prevent fraudulent actors is their decision.
“AdSecure is part of Exogroup and is born out of the experience that ExoClick has dealing with advertising fraud. ExoClick has been fighting advertising fraud since 2006 and has used the services of GeoEdge and others over the years. Unfortunately, most of these companies rely on outdated technology and they have proven inefficient to detect many types of fraud, especially the most recent ones, such as push lockers. This triggered Exogroup to invest into the development of a new technology, that would address the wide scope of issues that plague the online advertising ecosystem today,” Taylor wrote.
“There is no silver bullet to address the issue of malvertising. And there is no such thing as 100% safe. There is a very good reason why people set up an alarm system in their home. But even then, some more ambitious criminals might still break a window and give it a go. Do platforms and networks have issues with malicious activity? Yes, absolutely. And GeoEdge, RiskIQ, AdSecure or any others would not exist if that was not the case,” Taylor added. “If we refer to your quote from Amnon Siev, he admits himself ‘I’ve never tested their solution’ so we don’t think this even deserves a response. What matters to us are the results that the partners get from AdSecure, and the hundreds of malvertising issues that we prevent on a daily basis. And all of the companies fighting this fight are good companies to have on the market.”
It’s unclear if other Adsterra domains are connecting to Master134; the 184.108.40.206 IP address connects to thousands of domains, including a litany of WordPress sites as well as several ad network platforms, and Adsterra owns and operates a significant number of domains. For example, MyIP.ms, an online database of websites and IP addresses, shows more than 400 domains owned by Ad Market Limited, the corporate name of Adsterra.
LAS VEGAS — Despite Google’s own Project Zero being part of the discovery team for the Meltdown and Spectre vulnerabilities, Google itself wasn’t notified until 45 days after the initial report was sent to Intel, AMD and ARM.
Speaking at a panel on Meltdown and Spectre disclosure at Black Hat 2018 Wednesday, Matt Linton, senior security engineer and self-described “chaos specialist” at Google’s incident response team, explained how his company surprisingly fell through the cracks when it came time for the chip makers to notify OS vendors about the vulnerabilities.
“The story of Google’s perspective on Meltdown begins with both an act of brilliance and an act of extraordinary miscommunication, which is a real part of how incident response works,” Linton said during the session, titled “Behind the Speculative Curtain: The True Story of Fighting Meltdown and Spectre.”
Even though Project Zero researcher Jann Horn was part of both the Meltdown and Spectre discovery teams, Linton said, Project Zero never notified Google directly. Instead, the Project Zero group followed strict guidelines for responsible vulnerability disclosure and only notified the “owners” of the bugs, namely the chip makers.
“They feel very strongly in PZ [Project Zero] about being consistent about who they notify and rebuffing criticism that Project Zero gives Google early heads up about bugs and things,” Linton said. “I assure they did not.”
Project Zero notified Intel and the other chip makers about the vulnerabilities on June 1, 2017. It had been previously reported that Google’s incident response team wasn’t looped into the Meltdown and Spectre disclosure process until July, but it wasn’t entirely clear why that was. Linton explained what happened.
“[Project Zero] notified Intel and the other CPU vendors of these speculative execution vulnerabilities and they said a third of the way through the email that ‘We found these, here are the proof of concepts, and by the way, we haven’t told anyone else about this including Google, and it’s now your responsibility to tell anyone you need to tell,’ and somewhere along the line they missed that piece of the email,” he told the audience.
Linton said the CPU vendors began the Meltdown and Spectre disclosure process and started notifying companies that needed to know, such as Microsoft, but they apparently believed Google had already been informed because Project Zero was part of the discovery teams. As a result, Google was left out of the early stages of the coordinated disclosure process.
“As an incident responder, I didn’t find out about this until mid-July, 45 days after [the chip vendors] discovered it,” Linton said.
The miscommunication regarding Google was just one of several issues that plagued the massive coordinated disclosure effort for Meltdown and Spectre. The panelists, which included Eric Doerr, general manager of the Microsoft Security Response Center, and Christopher Robinson, principal program manager and team lead of Red Hat Product Security Assurance, discussed the ups and downs of the complex, seven-month process as well as advice for security researchers and vendors based on their shared experiences.
Editor’s note: Stay tuned for more from this panel on the Meltdown and Spectre disclosure process.
Researchers developed a new proof-of-concept attack on Spectre variant 1 that can be performed remotely, but despite the novel aspects of the exploit, experts questioned the real-world impact.
Michael Schwarz, Moritz Lipp, Martin Schwarzl and Daniel Gruss, researchers at the Graz University of Technology in Austria, dubbed their attack “NetSpectre” and claim it is the first remote exploit against Spectre v1 and requires “no attacker-controlled code on the target device.”
“Systems containing the required Spectre gadgets in an exposed network interface or API can be attacked with our generic remote Spectre attack, allowing [it] to read arbitrary memory over the network,” the researchers wrote in their paper. “The attacker only sends a series of crafted requests to the victim and measures the response time to leak a secret value from the victim’s memory.”
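The "Spectre gadget" the researchers refer to is the classic bounds-check-bypass pattern from the original Spectre v1 disclosure. A minimal sketch in C follows; the variable and function names are illustrative, not taken from the NetSpectre paper:

```c
#include <stddef.h>

/* Illustrative Spectre v1 (bounds check bypass) gadget shape.
 * Any reachable code of this shape can serve as a gadget if an
 * attacker controls x. */
size_t array1_size = 16;
unsigned char array1[16];
unsigned char array2[256 * 4096];

unsigned char victim_function(size_t x) {
    unsigned char value = 0;
    if (x < array1_size) {                 /* branch the CPU may mispredict */
        /* During misspeculation this load can read out of bounds, and the
         * dependent access into array2 leaves a measurable cache footprint. */
        value = array2[array1[x] * 4096];
    }
    return value;
}
```

In NetSpectre's remote setting, the attacker never runs code like this on the target; instead, the attack drives a gadget of this shape that is already present behind an exposed network interface or API, and infers the leaked bit from response timing.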
Gruss wrote on Twitter that Intel was given ample time to respond to the team’s disclosure of NetSpectre.
We informed Intel (and also other industry players) very early in the process on March 20. That was more than 120 days before the disclosure. We’re not happy with the situation either, but at some point customers deserve to know what they’re up to.
Gruss went on to criticize Intel for not designating a new Common Vulnerabilities and Exposures (CVE) number for NetSpectre, but Intel explained that this was because the fix is the same as for Spectre v1.
“NetSpectre is an application of Bounds Check Bypass (CVE-2017-5753) and is mitigated in the same manner — through code inspection and modification of software to ensure a speculation-stopping barrier is in place where appropriate,” an Intel spokesperson wrote via email. “We provide guidance for developers in our whitepaper, ‘Analyzing Potential Bounds Check Bypass Vulnerabilities,’ which has been updated to incorporate this method. We are thankful to Michael Schwarz, Daniel Gruss, Martin Schwarzl, Moritz Lipp and Stefan Mangard of Graz University of Technology for reporting their research.”
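The "speculation-stopping barrier" Intel describes is, on x86, typically an LFENCE instruction placed after the bounds check. A common portable software variant of the same idea masks the index so that even a mispredicted branch cannot produce an out-of-bounds address. A hedged sketch, with illustrative names:

```c
#include <stddef.h>

#define TABLE_SIZE 16                    /* power of two so masking works */
unsigned char table[TABLE_SIZE];
unsigned char probe[256 * 4096];

unsigned char read_element_safe(size_t x) {
    unsigned char value = 0;
    if (x < TABLE_SIZE) {
        /* Clamp the index before use: even if the branch is speculatively
         * taken with x out of range, the masked index stays in bounds.
         * (An _mm_lfence() here is the barrier Intel's whitepaper describes
         * for x86; masking is the portable software alternative.) */
        x &= (TABLE_SIZE - 1);
        value = probe[table[x] * 4096];
    }
    return value;
}
```

Because the fix is a code-level change applied wherever such gadgets exist, it addresses NetSpectre and local Spectre v1 exploitation alike, which is consistent with Intel's decision not to assign a new CVE.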
Jake Williams, founder and CEO of Rendition Infosec, agreed with Intel’s assessment and wrote via Twitter direct message that “it makes sense that this wouldn’t get a new CVE. It’s not a new vulnerability; it’s just exploiting an existing vulnerability in a new way.”
The speed of NetSpectre
Part of the research that caught the eye of experts was the detail that when exfiltrating memory, “this NetSpectre variant is able to leak 15 bits per hour from a vulnerable target system.”
Kevin Beaumont, a security architect based in the U.K., explained on Twitter what this rate of exfiltration means.
For the record, if you were ever actually be able to exploit it in real world (big if) it gives 15 bits of information per hour. There’s 8000000000 bits in 1gb. So only 60822 years to extract 1gb of RAM.
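Beaumont's back-of-the-envelope math checks out: at 15 bits per hour, leaking a full gigabyte takes on the order of 60,000 years. The calculation is simple enough to verify directly:

```c
/* Back-of-the-envelope check of the exfiltration-rate math quoted above. */
double years_to_leak(double total_bits, double bits_per_hour) {
    double hours = total_bits / bits_per_hour;
    return hours / 24.0 / 365.0;         /* hours -> days -> years */
}
/* years_to_leak(8e9, 15.0) is roughly 6.1e4 -- tens of millennia for 1 GB,
 * which is why the practical worry is small, targeted reads, not bulk dumps. */
```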
Williams agreed and said that although the NetSpectre attack is “dangerous and interesting,” it is “not worth freaking out about.”
“The amount of traffic required to leak meaningful amounts of data is significant and likely to be noticed,” Williams wrote. “I don’t think attacks like this will get significantly faster. Honestly, the attack could leak 10 to 100 times faster and still be relatively insignificant. Further, when you are calling an API remotely and others call the same API, they’ll impact timing, reducing the reliability of the exploit.”
Gruss wrote via Twitter direct message that since an attacker can use NetSpectre to choose an arbitrary address in memory to read, the impact of the speed of the attack depends on the use case.
“Remotely breaking ASLR (address space layout randomization) within a few hours is quite nice and very practical,” Gruss wrote, adding that “leaking the entire memory is of course completely unrealistic, but this is also not what any attacker would want to do.”
Despite the recent turmoil in the SSL certificate market, Comodo CA’s new leadership believes the space presents a wealth of opportunities.
Private equity firm Francisco Partners last fall acquired Comodo CA, the certificate authority arm of Comodo. Francisco Partners appointed Bill Conner, president and CEO of SonicWall, as chairman of Comodo CA, and named Bill Holtz, former COO of certificate authority Entrust and former CIO of Expedia, as the company’s new chief executive, replacing former CEO and founder Melih Abdulhayoglu. Now the two are tasked with expanding Comodo’s business in areas like the internet of things.
In part one of the conversation with SearchSecurity, Conner and Holtz discussed the struggles of Symantec’s certificate authority, the harsh actions taken by Google and Mozilla to correct those issues, and the effect they had on the overall market. In part two, they discuss the competitive landscape in the certificate space and the opportunities presented by IoT certificates. Here is part two of the discussion with Conner and Holtz:
How competitive is the certificate industry today as opposed to maybe 10 years ago, when there were more players?
Bill Conner: It was a much more fractured industry in the past, and there have been a lot of changes; you start to create new certificates, going from domain validation [DV] to organization validation [OV] and now extended validation [EV], over the last few years and have code signing and digital signatures, and then you go from RSA [cryptography] to elliptic curve. The industry was consolidated with a lot of mergers and acquisitions under Symantec. At that point, there were very few people that had the root keys in the browsers. In one sense, there was less competition back then because there wasn’t enough space in the market to have it. Increasingly, as you could have more certificates, a lot of people started entering the space. And there was fallout from that because some companies couldn’t handle the lifecycle and others got cannibalized. Some of those brands survived under other companies like Symantec. If you look at the market today, there are a lot more people playing like Let’s Encrypt and others around the world at the low end. At the high end, it’s a smaller group: GlobalSign, DigiCert and Comodo. So there are fewer players, but more competition. And in light of the latest episode with Symantec, if you have to move to new certificates, you’re probably going to look at other [CA] options. So there may be more competitive activities today than there used to be.
Conner: I’ll also say that with net-new IoT certificates for connected devices and code signing and other areas, we’re going to have more and more certificates, but they’re going to get bent to do new things. It won’t just be for authentication. It might be for non-repudiation or digital signatures. I think that is going to morph as networks, cloud services, mobile devices and applications reshape themselves in the next five to 10 years.
Bill Holtz: There’s definitely competition out there, but I think some people are pigeonholing themselves. Look at Let’s Encrypt, for example, in the DV space. A lot of people that did not have certificates before are using them, but there are limitations. They’re 90-day certificates, they don’t cover all of the legacy servers that you may have in your enterprise, and they don’t come with support. But for the market segment they’re in, they’re serving a purpose. And it is getting the web encrypted; if you look at the number of HTTPS pages on the web, it’s increased dramatically. And that plays to the industry’s advantage because it raises awareness. But Let’s Encrypt doesn’t do OV or EV. So there is some competition, but I think our path is pretty well laid out for us.
You said trust in the certificate authority business today has taken a big hit. What’s the appeal of getting into this business today, and where do you want to take Comodo CA as a certificate authority?
Holtz: The appeal is it’s a healthy business that generates a lot of cash, and it provides an important service to the internet. The internet can’t run without certificates. There’s been a discussion now for over a decade about how SSL and PKI are going to go away, but all I see are SSL and PKI continuing to thrive. Certificates are going to continue to grow. In fact, with IoT devices, now you have certificates going everywhere. It’s a great business to be in, and it has a lot of growth potential in different areas. What you have to start looking at is complete certificate lifecycle management. It’s not just about issuing the certificate. I think customers are looking for help for this complete lifecycle management, whether it’s finding out what certificates you have, how you maintain them and how you renew them. And when you apply that even further to IoT devices, it’s a really exciting space to be in.
Conner: If you look at the technology landscape, apps are talking to apps and devices are talking to apps. You also have the cloud, so instead of the old way of endpoints talking to endpoints, you have endpoints talking to cloud services. There’s not a place for people in those areas. Those are going to morph. Those won’t be classic X.509 certificates as you used to think of them. And at the core of everything is the basis of trust and how to validate it and create handshakes for it. You can do public trusted and you can also do non-publicly trusted. I think the new world will have a hybrid of those approaches as these new applications and networks are formed. And by the way, the traditional business is still pretty attractive for someone like Comodo to pick up some market share and some financial opportunities while helping to drive some of those new capabilities and new markets.
Given all the struggles we’ve seen with different certificate authorities in recent years, do you feel the certificate business is a challenging one?
Conner: It is for the layperson because it’s not well understood. Certificates are pretty basic, but when you get into what you have to do with root certificates and managing them, then you’ve got to have expertise. That’s your secret sauce as a certificate authority. And ultimately that’s [the] exciting thing that Melih [Abdulhayoglu] and I saw, and Bill [Holtz] ultimately saw as well. The expertise that [Abdulhayoglu] had and what [Holtz] and I brought make a very interesting combination of talent that I don’t think exists in this space today.
Holtz: I’d say it is a hard business from the standpoint that you have to have the right level of intellectual capability in the executive team running the business. We saw Symantec leaning more and more on their partners and letting other people do things, and we saw what happened there. You have to pay attention to what you’re doing. You can’t be issuing rogue certificates. There are a lot of things you have to be doing, and doing them well, every day in this business. There’s little room for error. So yes, it’s a hard business, but as Bill [Conner] said, we’re starting with a great base here at Comodo, and we’re attracting some of the best talent that we know in this market so we can take the business to the next level.