
Silver Peak SD-WAN adds service chaining, partners for cloud security

Silver Peak boosted its software-defined WAN security for cloud-based workloads with the introduction of three security partners.

Silver Peak Unity EdgeConnect customers can now add security capabilities from Forcepoint, McAfee and Symantec for layered security in their Silver Peak SD-WAN infrastructure, the vendor said in a statement. The three security newcomers join existing Silver Peak partners Check Point, Fortinet, OPAQ Networks, Palo Alto Networks and Zscaler.

Silver Peak SD-WAN allows customers to filter application traffic that travels to and from cloud-based workloads through security processes from third-party security partners. Customers can insert virtual network functions (VNFs) through service chaining wherever they need the capabilities, which can include traffic inspection and verification, distributed denial-of-service protection and next-generation firewalls.

These partnership additions build on Silver Peak’s recent update to incorporate a drag-and-drop interface for service chaining and enhanced segmentation capabilities. For example, Silver Peak said a typical process starts with customers defining templates for security policies that specify segments for users and applications. This segmentation can be created based on users, applications or WAN services — all within Silver Peak SD-WAN’s Unity Orchestrator.

Once the template is complete, Silver Peak SD-WAN launches and applies the security policies for those segments. These policies can include configurations for traffic steering, so specific traffic automatically travels through certain security VNFs, for example. Additionally, Silver Peak said customers can create failover procedures and policies for user access.
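Silver Peak has not published the underlying schema, but conceptually a segment-plus-service-chain policy of the kind described above can be sketched as a small data structure. The segment names, VNF labels and field names in the Python sketch below are hypothetical, not Unity Orchestrator's actual configuration format or API.

    # Hypothetical sketch of a segment-plus-service-chain policy of the kind
    # described above. Field names, segment names and VNF labels are
    # illustrative only, not Silver Peak Unity Orchestrator's actual schema.

    segment_template = {
        "segment": "guest-wifi",                     # segment defined per user group
        "match": {"users": "guest", "apps": ["web", "saas"]},
        "wan_services": ["broadband-internet"],      # underlay this segment may use
        "service_chain": [                           # ordered third-party VNFs to traverse
            {"vnf": "cloud-secure-web-gateway", "action": "inspect"},
            {"vnf": "next-gen-firewall", "action": "enforce"},
        ],
        "failover": {"on_vnf_unreachable": "drop"},  # fail closed if the chain breaks
    }

    def steer(flow: dict, template: dict) -> list:
        """Return the ordered VNFs a matching flow must transit, or [] for the default path."""
        match = template["match"]
        if flow.get("user_group") == match["users"] and flow.get("app") in match["apps"]:
            return [hop["vnf"] for hop in template["service_chain"]]
        return []

    print(steer({"user_group": "guest", "app": "saas"}, segment_template))
    # ['cloud-secure-web-gateway', 'next-gen-firewall']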

Enterprises are increasingly moving their workloads to public cloud and SaaS environments, such as Salesforce or Microsoft Office 365. Securing that traffic — especially traffic that travels directly over broadband internet connections — remains top of mind for IT teams, however. By service chaining security functions from third-party security companies, Silver Peak SD-WAN customers can access those applications more securely, the company said.

Silver Peak SD-WAN holds 12% of the $162 million SD-WAN market, according to a recent IHS Markit report, which ranks the vendor third after VMware-VeloCloud and Aryaka.

ONF pinpoints four technology areas to develop

The Open Networking Foundation unveiled four new supply chain partners that are working to develop technology reference designs based on ONF’s strategic plan. Along with the four partners — Adtran, Dell EMC, Edgecore Networks and Juniper Networks — ONF finalized the focus areas for the initial reference designs.

ONF’s reference designs provide blueprints to follow while building open source platforms that use multiple components, the foundation said in a statement. While the broad focus for these blueprints looks at edge cloud, ONF targeted four specific technology areas:

  • SDN-enabled broadband access. This reference design is based on a variant of the Residential Central Office Re-architected as a Datacenter project, which is designed to virtualize residential access networks. ONF’s project likewise supports virtualized access technologies.
  • Network functions virtualization fabric. This blueprint develops work on leaf-spine data center fabric for edge applications.
  • Unified programmable and automated network. ONF touts this as a next-generation SDN reference design that uses the P4 language for data plane programmability.
  • Open disaggregated transport network. This reference design focuses on open multivendor optical networks.

Adtran, Dell EMC, Edgecore Networks and Juniper each apply their own technology expertise to these reference design projects, ONF said. Additionally, as supply chain partners, they’ll aid operators in assembling deployment environments based on the reference designs.

Inside the private event where Microsoft, Google, Salesforce and other rivals share security secrets

Speaking this week on the Microsoft campus, L-R: Erik Bloch, Salesforce security products and program management director; Alex Maestretti, engineering manager on the Netflix Security Intelligence and Response Team; David Seidman, Google security engineering manager; and Chang Kawaguchi, director for Microsoft Office 365 security. (GeekWire Photos / Todd Bishop)

REDMOND, Wash. — At first glance, the gathering inside Building 99 at Microsoft this week looked like many others inside the company, as technical experts shared hard-earned lessons for using machine learning to defend against hackers.

Ram Shankar Siva Kumar, Microsoft security data wrangler, spearheaded the event.

It looked normal, that is, until you spotted the person in the blue Google shirt addressing the group, next to speakers from Salesforce, Netflix and Microsoft, at a day-long event that included representatives of Facebook, Amazon and other big cloud providers and services that would normally treat technical insights as closely guarded secrets.

As the afternoon session ended, the organizer from Microsoft, security data wrangler Ram Shankar Siva Kumar, complimented panelist Erik Bloch, the Salesforce security products and program management director, for “really channeling the Ohana spirit,” referencing the Hawaiian word for “family,” which Salesforce uses to describe its internal culture of looking out for one another.

It was almost enough to make a person forget the bitter rivalry between Microsoft and Salesforce.

Siva Kumar then gave attendees advice on finding the location of the closing reception. “You can Bing it, Google it, whatever it is,” he said, as the audience laughed at the rare concession to Microsoft’s longtime competitor.

It was no ordinary gathering at Microsoft, but then again, it’s no ordinary time in tech. The Security Data Science Colloquium brought the competitors together to focus on one of the biggest challenges and opportunities in the industry.

Machine learning, one of the key ingredients of artificial intelligence, is giving the companies new superpowers to identify and guard against malicious attacks on their increasingly cloud-oriented products and services. The problem is that hackers are using many of the same techniques to take those attacks to a new level.

Dawn Song, UC Berkeley computer science and engineering professor.

“The challenge is that security is a very asymmetric game,” said Dawn Song, a UC Berkeley computer science and engineering professor who attended the event. “Defenders have to defend across the board, and attackers only need to find one hole. So in general, it’s easier for attackers to leverage these new techniques.”

That helps to explain why the competitors are teaming up.

“At this point in the development of this technology it’s really critical for us to move at speed to all collaborate,” explained Mark Russinovich, the Microsoft Azure chief technology officer. “A customer of Google is also likely a customer of Microsoft, and it does nobody any good or gives anybody a competitive disadvantage to keep somebody else’s customer, which could be our own customer, insecure. This is for the betterment of everybody, the whole community.”

[Editor’s Note: Russinovich is a keynoter at the GeekWire Cloud Tech Summit, June 27 in Bellevue, Wash.]

This spirit of collaboration is naturally more common in the security community than in the business world, but the colloquium at Microsoft has taken it to another level. GeekWire is the first media organization to go inside the event, although some presentations weren’t opened up to us, due in part to the sensitive nature of some of the information the companies shared.

The event, in its second year, grew out of informal gatherings between Microsoft and Google, which resulted in part from connections Siva Kumar made on long-distance runs with Google’s tech security experts. After getting approval from his manager, he brought one of the Google engineers to Microsoft two years ago to compare notes with his team.

The closing reception for the Security Data Science Colloquium at Microsoft this week. (GeekWire Photo / Todd Bishop)

Things have snowballed from there. After the first event, last year, Siva Kumar posted about the colloquium, describing it as a gathering of “security data scientists without borders.” As the word got out, additional companies asked to be involved, and Microsoft says this year’s event was attended by representatives of 17 different tech companies in addition to university researchers.

The event reflects a change in Microsoft’s culture under CEO Satya Nadella, as well as a shift in the overall industry’s approach. Of course, the companies are still business rivals that compete on the basis of beating each other’s products. But in years or decades past, many treated security as a competitive advantage, as well. That’s what has changed.

“This is not a competing thing. This is not about us trying to one up each other,” Siva Kumar said. “It just feels like, year over year, our problems are just becoming more and more similar.”

Siamac Mirzaie of Netflix presents at the event. (GeekWire Photo / Todd Bishop)

In one afternoon session this week, representatives from Netflix, one of Amazon Web Services’ marquee customers, gave detailed briefings on the streaming service’s internal machine learning tools, including its “Trainman” system for detecting and reporting unusual user activity.

Developing and improving the system has been a “humbling journey,” said Siamac Mirzaie from the Netflix Science & Analytics Team, before doing a deep dive on the technical aspects of Trainman.

Depending on the situation, he said, Netflix uses Python, Apache Spark or Flink to bring the data into its system and append the necessary attributes. It then uses simple rules, statistical models and machine learning models, running in Flink or Spark, to detect anomalies, followed by a post-processing layer that combines Spark and Node.js. A final stage visualizes the anomalies on a timeline that people inside the company can use to drill down into and understand specific events.

“The idea is to refine the various data anomalies that we’ve generated in the previous stage into anomalies that our application owner or security analyst can actually relate to,” Mirzaie said.
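Netflix has not released Trainman, but the detection stage Mirzaie outlined, simple rules plus statistical models followed by post-processing into analyst-friendly findings, can be illustrated with a small, self-contained Python sketch. The events, field names and thresholds below are invented for the example.

    from statistics import mean, stdev

    # Illustrative sketch of a rules-plus-statistics anomaly stage like the one
    # described above. Events, fields and thresholds are invented for the
    # example; this is not Netflix's Trainman code.

    baseline_download_bytes = [1_200, 900, 1_100, 950, 1_000, 1_300]  # historical norm

    events = [
        {"user": "alice", "action": "download", "bytes": 250_000},  # unusual volume
        {"user": "alice", "action": "download", "bytes": 1_150},
        {"user": "bob",   "action": "login",    "country": "KP"},   # rule hit
    ]

    def rule_anomalies(evts):
        # Simple rule: logins from countries on a deny list are always flagged.
        deny = {"KP"}
        return [e for e in evts if e.get("action") == "login" and e.get("country") in deny]

    def statistical_anomalies(evts, baseline, z_threshold=3.0):
        # z-score of each download size against a historical baseline; production
        # systems model many more features and add trained models on top.
        mu, sigma = mean(baseline), stdev(baseline)
        return [e for e in evts if "bytes" in e and (e["bytes"] - mu) / sigma > z_threshold]

    def post_process(anomalies):
        # Collapse raw detections into per-user findings an analyst can relate to.
        findings = {}
        for a in anomalies:
            findings.setdefault(a["user"], []).append(a)
        return findings

    detections = rule_anomalies(events) + statistical_anomalies(events, baseline_download_bytes)
    print(post_process(detections))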

The stakes are high given the $8 billion that Netflix is expected to spend on content this year.

But the stakes might be even higher for Facebook. The social network, which has been in the international spotlight over misuse of its platform by outside companies and groups, says it uses a combination of automated and manual systems to identify fraudulent and suspicious activity.

Facebook, which held a similar event of its own in April, was among the companies that presented during the gathering at Microsoft this week. Facebook recently announced that it used new machine learning practices to detect more than 500,000 accounts tied to financial scams.

Mark Russinovich, Microsoft Azure CTO, in his conference room on the company’s Redmond campus this week. (GeekWire Photo / Todd Bishop)

During his keynote, Microsoft’s Russinovich talked in detail about Windows PowerShell, the command-line program that is a popular tool for attackers in part because it’s built into the system. Microsoft’s Windows Defender Advanced Threat Protection is designed to detect suspicious command lines, and Microsoft was previously using a traditional model that was trained to recognize potentially malicious sequences of characters.

“That only got us so far,” Russinovich said in an interview.

After brainstorming ways to solve the problem, the company’s security defense researchers figured out how to apply deep neural networks, more commonly used in vision-based object detection, for use in PowerShell malicious script detection, as well. They essentially came up with a way to encode command lines to make them look like images to the machine learning model, Russinovich explained. The result surpassed the traditional technique “by a significant amount,” he said.
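Microsoft has not published the encoding itself, but the general idea of presenting a command line to a vision-style network can be sketched as follows. The 16x64 shape, byte-level alphabet and [0, 1] scaling are assumptions made for illustration, not the team's actual design.

    import numpy as np

    # Illustrative sketch of turning a command line into a fixed-size 2-D array
    # that a convolutional network could treat as a single-channel image. The
    # shape, alphabet and scaling are assumptions, not Microsoft's encoding.

    ROWS, COLS = 16, 64              # fixed "image" size; longer inputs are truncated

    def commandline_to_image(cmd: str) -> np.ndarray:
        codes = [min(ord(c), 255) for c in cmd[: ROWS * COLS]]   # one byte per character
        codes += [0] * (ROWS * COLS - len(codes))                # pad to fixed length
        return np.array(codes, dtype=np.float32).reshape(ROWS, COLS) / 255.0

    suspicious = "powershell -NoProfile -WindowStyle Hidden -EncodedCommand SQBFAFgAIAAoAE4A..."
    x = commandline_to_image(suspicious)
    print(x.shape)                   # (16, 64): ready for a CNN as a one-channel image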

At the closing panel discussion, David Seidman, Google security engineering manager, summed up the stated philosophy of the event. “We are not trying to compete on the basis of our corporate security,” Seidman said. “Google is not trying to get ahead of Microsoft in the cloud because Microsoft got compromised. That’s the last thing we want to see.”

“We are fighting common enemies,” Seidman added. “The same attackers are coming after all of us, and an incident at one company is going to affect that customer’s trust in all the cloud companies they do business with. So we have very much aligned interests here.”

Hybrid cloud security architecture requires rethinking

Cloud security isn’t for the squeamish. Protecting cloud-based workloads and designing a hybrid cloud security architecture have proved more difficult than first envisioned, said Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass.

“The goal was simple,” he said. Enterprises wanted the same security they had for their internal workloads to be extended to the cloud.

But using existing security apps didn’t work out so well. In response, enterprises tried to concoct their own, but that meant the majority of companies had separate security foundations for their on-premises and cloud workloads, Oltsik said.

The key to creating a robust hybrid cloud security architecture is central policy management, where all workloads are tracked, policies and rules are applied, and networking components are displayed in a centralized console. Firewall and security vendors are beginning to roll out products supporting this strategy, Oltsik said, but it’s still incumbent upon CISOs to proceed carefully.

“The move to central network security policy management is a virtual certainty, but which vendors win or lose in this transition remains to be seen.”

Read the rest of what Oltsik had to say about centralized cloud security.

User experience management undergoing a shift

User experience management, or UEM, is a more complex concept than you may realize.

Dennis Drogseth, an analyst at Enterprise Management Associates in Boulder, Colo., described the metamorphosis of UEM, debunking the notion that the methodology is merely a subset of application performance management.

Instead, Drogseth said, UEM is multifaceted, encompassing application performance, business impact, change management, design, user productivity and service usage.

According to EMA research, over the last three years the two most important areas for UEM have been application performance and portfolio planning and optimization. UEM can provide valuable insights that assist both IT and the business.

One question surrounding UEM is whether it falls into the realm of IT or business. In years past, EMA data suggested 20% of networking staffers considered UEM a business concern, 21% saw it as an IT concern and 59% said it should be equally an IT and business concern. Drogseth agreed wholeheartedly with the latter group.

Drogseth expanded on the usefulness of UEM in his blog, including how UEM is important to DevOps and creating an integrated business strategy.

Mixed LPWAN results, but future could be bright

GlobalData analyst Kitty Weldon examined the evolving low-power WAN market in the wake of the 2018 annual conference in London.

Mobile operators built out their LPWAN networks in 2017, Weldon said, and are now looking for returns on those investments. Essentially every internet of things (IoT) service hopped on the LPWAN bandwagon; now they await the results.

So far, there have been 48 launches by 26 operators.

The expectation remains that lower costs and improved battery life will eventually usher in thousands of new low-bandwidth IoT devices connecting to LPWANs. However, Weldon noted that it’s still the beginning of the LPWAN era, and right now feelings are mixed.

“Clearly, there is some concern in the industry that the anticipated massive uptake of LPWANs will not be realized as easily as they had hoped, but the rollouts continue and optimism remains, tempered with realistic concerns about how best to monetize the investments.”

Read more of what Weldon had to say here.

New MalwareTech indictment adds four more charges

The court saga of Marcus Hutchins, a security researcher from England also known as MalwareTech, will continue after a superseding indictment filed by the U.S. government added new charges to his case.

Hutchins was originally arrested in August 2017 on charges of creating and distributing the Kronos banking Trojan. The superseding MalwareTech indictment, filed on Wednesday, adds four new charges to the original six, including the creation of the UPAS kit malware, conspiracy to commit wire fraud, and lying to the FBI.

Hutchins first gained prominence in May 2017 as one of the researchers who helped slow the spread of the WannaCry ransomware, and he recently mused on Twitter about the connection between that act and the new MalwareTech indictment.

Hutchins also had strong language to describe the superseding indictment, but one of his lawyers, Brian Klein, was more measured.

A question about the new MalwareTech indictment

The UPAS Kit described in the new filing was a form grabber that Hutchins admitted to creating, though he asserted it was not connected to Kronos. Marcy Wheeler, a national security and civil liberties expert, questioned how the UPAS Kit charges made it into the new MalwareTech indictment, given the time frames involved.

The indictment noted that the UPAS Kit was originally sold and distributed in July 2012, and it alleged Hutchins developed Kronos “prior to 2014” and supplied it to the individual who sold the UPAS Kit. However, Wheeler pointed out in a blog post that a five-year statute of limitations should apply to such charges, and that even if the government could get around it, Hutchins would have been a minor in 2012 when these actions allegedly took place.

Additionally, Wheeler noted that when Hutchins was first arrested by the FBI, he admitted to creating the UPAS form grabber, though he denied it was part of Kronos. That admission calls into question the new charge that Hutchins lied to the FBI about creating Kronos.

Apple iOS 12 USB Restricted Mode to foil thieves, law enforcement

A security feature that had popped up in beta versions of Apple’s iOS software appears to be coming in earnest as part of iOS 12, and it will protect devices against anyone trying to unlock them via USB.

USB Restricted Mode is described in the iOS 12 settings as the option to enable or deny the ability to “unlock [an] iPhone to allow USB accessories to connect when it has been more than an hour since your iPhone was locked.” In practice, this means a device will require a passcode unlock in order to connect any Lightning-to-USB accessory after the one-hour time limit has passed.
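The behavior described above amounts to a simple policy check; a toy Python model of it, purely illustrative and in no way Apple's implementation, might look like this:

    from datetime import datetime, timedelta

    # Toy model of the USB Restricted Mode policy described above: data over the
    # Lightning port requires a passcode unlock once the device has been locked
    # for more than an hour. Purely illustrative, not Apple's implementation.

    USB_DATA_WINDOW = timedelta(hours=1)

    def allow_usb_accessory(last_locked_at, unlocked_with_passcode, now=None):
        now = now or datetime.now()
        if unlocked_with_passcode:
            return True
        return (now - last_locked_at) <= USB_DATA_WINDOW

    # A device locked two hours ago and not unlocked since refuses USB data.
    print(allow_usb_accessory(datetime.now() - timedelta(hours=2), unlocked_with_passcode=False))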

Apple didn’t mention USB Restricted Mode during the keynote at its Worldwide Developers Conference on Monday, but developers saw it in the iOS 12 preview, which was released that same day. The setting is on by default and covers any type of security on an iOS device — Touch ID, Face ID and passcode.

Experts noted USB Restricted Mode will protect users’ data if a device is stolen, but it will also prevent law enforcement from using unlocking services from companies like GrayKey and Cellebrite — the latter of which was rumored to have helped the FBI unlock the San Bernardino, Calif., shooter’s iPhone.

Earlier tests of USB Restricted Mode had allowed for a one-week time limit, spurring GrayKey to reportedly alert customers to the feature when it surfaced in the iOS 11.3 beta, according to internal email messages obtained by Motherboard. A one-hour time limit could effectively make it impossible for customers to get a device to a company like GrayKey in time to gain brute-force access.

Rusty Carter, vice president of product management at Arxan, based in San Francisco, said USB Restricted Mode “is really about increasing the security of the device.”


“If the device is vulnerable to brute-force attacks via wired connection, other security features, like being able to wipe the device after 10 unsuccessful authentication attempts, are rendered useless … they are effectively a false sense of security,” Carter wrote via email. “Effectively, any data is vulnerable, unless the individual app developer has done the right thing both to secure and encrypt user data and require more than stored credentials or identity to access the data with their app, which is rarely the case today.”

John Callahan, CTO of Veridium, based in Quincy, Mass., said, as a developer, his initial reaction to USB Restricted Mode was, “Great, now I’ll have to unlock the phone every time I go to debug a mobile app with Xcode.” But he later realized it could have protected a lot of stolen devices if it had been implemented in an earlier version of iOS.

“USB Restricted Mode in iOS 12 is a big win for users, because we are keeping more personally identifiable information on our mobile devices, including healthcare, identification and biometric data. Our phones have become our digital wallets, and we expect a maximum level of privacy and convenience,” Callahan wrote via email. “Android devices, ironically seen as less secure, have long required unlocking when connected in USB Debug mode. In many ways, Apple is playing catch-up with respect to physical device security.”

SS7 vulnerabilities enable breach of major cellular provider

The U.S. Department of Homeland Security warned of an exploit of the Signaling System 7 protocol that may have targeted American cellphone users.

The Washington Post reported that DHS notified Sen. Ron Wyden (D-Ore.) last week that malicious actors “may have exploited” global cellular networks “to target the communications of American citizens.” The letter has not been made public, but The Washington Post obtained a copy of it and reported that it described surveillance systems that exploit Signaling System 7 (SS7) vulnerabilities. According to the report, the exploit enables intelligence agencies and criminal groups to spy on targets using nothing but their cellphone number.

SS7 is the international telecommunications standard used since the 1970s by telecommunications providers to exchange call routing information in order to set up phone connections. Cellphone providers use SS7 to enable users to send and receive calls as they move from network to network anywhere in the world. The protocol has been criticized by analysts and experts for years because of its vulnerabilities and because it enables spying and data interception.

In a different letter to Ajit Pai, chairman of the Federal Communications Commission, Wyden referenced an “SS7 breach” at a major wireless carrier and criticized the FCC for its inaction regarding SS7 vulnerabilities.

“Although the security failures of SS7 have long been known to the FCC, the agency has failed to address the ongoing threat to national security and to the 95% of Americans who have wireless service,” Wyden wrote.

He explained the SS7 vulnerabilities enable attackers to intercept people’s calls and texts, as well as hack into phones to steal financial information or get location data.

“In a prior letter to me, you dismissed my request for the FCC to use its regulatory authority to force the wireless industry to address the SS7 vulnerabilities,” Wyden wrote to Pai. “You cited the work of the [Communications Security, Reliability and Interoperability Council] as evidence that the FCC is addressing the threat. But neither CSRIC nor the FCC have taken meaningful action to protect hundreds of millions of Americans from potential surveillance by hackers and foreign governments.”

In the letter, Wyden included a call to action for Pai to use the FCC’s “regulatory authority” to address the security issues with SS7 and to disclose information about SS7-related breaches to Wyden by July 9, 2018.

In other news:

  • The U.S. government ban on using Kaspersky Lab products was upheld this week, and the security company’s lawsuits were dismissed. U.S. District Judge Colleen Kollar-Kotelly dismissed two lawsuits filed by Kaspersky Lab in response to Binding Operational Directive 17-01 and the National Defense Authorization Act (NDAA), both of which banned the company’s products from use in the federal government. Kaspersky argued the ban was unconstitutional and caused undue harm to the company, but Kollar-Kotelly dismissed the argument and said while there may be “adverse consequences” for Kaspersky, the ban is not unconstitutional. Kaspersky Lab has said it will file an appeal of the ruling.
  • The U.S. House of Representatives advanced a bill that would require law enforcement to get a warrant before collecting data from email providers. The Email Privacy Act was added as an amendment to the NDAA, which is the annual budget for the Department of Defense. The bill passed the House 351-66 and will now move to the Senate for approval. The amendment was authored by Rep. Kevin Yoder (R-Kan.) and is the latest version of the 2016 Email Privacy Act that received unanimous support in the House. If the NDAA passes with this amendment included, it will provide warrant protections to all email, chats and online messages that law enforcement might want or need for investigations. The Electronic Frontier Foundation has been a proponent of email privacy in law, saying, “The emails in your inbox should have the same privacy protections as the papers in your desk.”
  • The private equity investment firm Thoma Bravo is acquiring a majority share in the security company LogRhythm. LogRhythm offers its users a security information and event management platform that also has user and entity behavior analytics features. The company has been in business for 15 years and has more than 2,500 customers worldwide. “LogRhythm believes it has found an ideal partner in Thoma Bravo,” said LogRhythm’s president and CEO, Andy Grolnick, in a statement. “As we seek to take LogRhythm to the next level and extend our position as the market’s preeminent NextGen SIEM vendor, we feel Thoma Bravo’s cybersecurity domain expertise and track record of helping companies drive growth and innovation will make this a powerful and productive relationship.” The deal is expected to close later in 2018. Thoma Bravo owns the certificate authority company DigiCert, which recently purchased Symantec’s CA operations, and has previously invested in other cybersecurity companies, including SonicWall, SailPoint, Hyland Security, Deltek, Blue Coat Systems, Imprivata, Bomgar, Barracuda Networks, Compuware and SolarWinds.

Federal cybersecurity report says nearly 75% of agencies at risk

The latest federal cybersecurity report holds little good news regarding the security posture of government agencies, and experts are not surprised by the findings.

The Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) developed the report in accordance with President Donald Trump’s cybersecurity executive order issued last year. The report acknowledged the difficulties agencies face in terms of budgeting, maintaining legacy systems and hiring in the face of the cybersecurity skills gap, and it identified 71 of 96 agencies as being either “at risk or high risk.”

“OMB and DHS also found that federal agencies are not equipped to determine how threat actors seek to gain access to their information. The risk assessments show that the lack of threat information results in ineffective allocations of agencies’ limited cyber resources,” OMB and DHS wrote in the report. “This situation creates enterprise-wide gaps in network visibility, IT tool and capability standardization, and common operating procedures, all of which negatively impact federal cybersecurity.”

The federal cybersecurity report tested the agencies involved under 76 metrics and identified four major areas of improvement: increasing threat awareness, standardizing IT capabilities, consolidating security operations centers (SOCs), and improving leadership and accountability.

Greg Touhill, president of Cyxtera Federal Group, based in Coral Gables, Fla., and former CISO for the United States, said the report was an “accurate characterization of the current state of cyber risk and a reflection of the improvements made over the last five years in treating cybersecurity as a risk management issue, rather than just a technology problem.”

“I am concerned that the deletions of and vacancies in key senior cyber leadership positions [are] sending the wrong message about how important cybersecurity is to the government workforce, commercial and international partners, and potential cyber adversaries,” Touhill wrote via email. “As national prosperity and national security are dependent on a strong cybersecurity program that delivers results that are effective, efficient and secure, I believe cybersecurity ought to be at the top of the agenda, and we need experienced cyber leaders sitting at the table to help guide the right decisions.”

Agencies at risk

The federal cybersecurity report said many agencies lack situational awareness and noted this has been a long-standing issue in the U.S. government.


“For the better part of the past decade, OMB, the Government Accountability Office, and agency [inspectors general] have found that agencies’ enterprise risk management programs do not effectively identify, assess, and prioritize actions to mitigate cybersecurity risks in the context of other enterprise risks,” OMB wrote. “In fact, situational awareness is so limited that federal agencies could not identify the method of attack, or attack vector, in 11,802 of the 30,899 cyber incidents (38%) that led to the compromise of information or system functionality in [fiscal year] 2016.”

Sherban Naum, senior vice president of corporate strategy and technology at Bromium, based in Cupertino, Calif., said improving information sharing might not “address the protection component.”

“Sharing information in real time of an active and fully identified attack is critical. However, more information alone won’t help if there is no contextual basis to understand what was attacked, what vulnerability was leveraged, the attacker’s intent and impact to the enterprise,” Naum said. “I wonder what systems are in place or are needed to process the real-time threat data to then automatically protect the rest of the federal space.”

Not all of the news was bad. OMB noted that 93% of users in the agencies studied use multifactor authentication in the form of personal identity verification cards. However, the report said this was only the beginning, as “agencies have not matured their access management capabilities” for modern mobile use.

“One of the most significant security concerns that results from the current decentralized and fragmented IT landscape is ineffective identity, credential, and access management processes,” OMB wrote. “Fundamentally, any organization must have a clear understanding of the people, assets, and data on its networks.”

The federal cybersecurity report acknowledged the number of high-profile data leaks and breaches across government systems in recent years and said the situation there is not improving.

“Federal agencies do not have the visibility into their networks to effectively detect data exfiltration attempts and respond to cybersecurity incidents. The risk assessment process revealed that 73 percent of agency programs are either at risk or high risk in this critical area,” OMB wrote. “Specific metrics related to data loss prevention and exfiltration demonstrate even greater problems, with only 40 percent of agencies reporting the ability to detect the encrypted exfiltration of information at government-wide target levels. Only 27 percent of agencies report that they have the ability to detect and investigate attempts to access large volumes of data, and even fewer agencies report testing these capabilities annually.”

Additionally, only 16% of agencies have properly implemented encryption on data at rest.

Suggested improvements

The federal cybersecurity report had suggestions for improving many of the poor security findings, including consolidating email systems, creating standard software configurations and a shared marketplace for software, and improving threat intelligence sharing across SOCs. However, many of the suggestions related directly to following National Institute of Standards and Technology (NIST) Cybersecurity Framework guidelines, the Cyber Threat Framework developed by the Office of the Director of National Intelligence, or DHS’ Continuous Diagnostics and Mitigation (CDM) program.

Katherine Gronberg, vice president of government affairs at ForeScout Technologies, based in San Jose, Calif., said the focus of CDM is on real-time visibility.

“For example, knowing you have 238 deployed surveillance cameras found to have a particular vulnerability is a good example of visibility. Knowing that one or more of those cameras is communicating with high-value IT assets outside of its segment is further visibility, and then seeing that a camera is communicating externally with a known, malicious command-and-control IP address is the type of visibility that helps decision-making,” Gronberg wrote via email. “CDM intends to give agencies this level of real-time domain awareness in addition to securing data. It’s worth noting that many agencies are now moving to Phase 3 of CDM, which is about taking action on the problems that are discovered.”
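The correlation Gronberg describes, joining device inventory, observed traffic flows and threat intelligence, can be reduced to a small illustrative sketch. The device names, IP addresses and indicator feed below are made up for the example.

    # Minimal sketch of the kind of correlation described above: inventory,
    # observed flows and threat intelligence joined to flag risky devices.
    # Device names, addresses and indicators are invented for illustration.

    inventory = {
        "cam-017": {"type": "surveillance camera", "segment": "iot"},
        "db-01":   {"type": "database server",     "segment": "high-value"},
    }

    flows = [  # (source device, destination IP, destination segment or "external")
        ("cam-017", "10.20.0.5",    "high-value"),
        ("cam-017", "203.0.113.66", "external"),
    ]

    known_c2_ips = {"203.0.113.66"}  # hypothetical indicator feed

    for device, dst_ip, dst_segment in flows:
        meta = inventory.get(device, {})
        if dst_segment == "high-value" and dst_segment != meta.get("segment"):
            print(f"{device} ({meta.get('type')}) is talking to high-value assets outside its segment")
        if dst_ip in known_c2_ips:
            print(f"{device} is communicating with known C2 address {dst_ip}: isolate it")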

Katie Lewin, federal director for the Cloud Security Alliance, said “standardization is an effective tool to get the best value from resources,” especially given that many risks faced by government agencies are due to the continued use of legacy systems.

“Standardized, professionally managed cloud systems will significantly help reduce risks and eliminate several threat vectors,” Lewin wrote via email. “If agencies adopt DHS’s Continuous Diagnostics and Mitigation process, they will not have to develop and reinvent custom programs. However, as with all standards, there needs to be some flexibility. Agencies should be able to modify a standard approach within defined limits. Failure to involve agencies in developing a common approach and in defining the boundaries of flexibility will result in limited acceptance and adoption of the common approach.”

Gary McGraw, vice president of security technology at Synopsys Inc., based in Mountain View, Calif., said focusing on standards may not hold much improvement.

“The NIST Framework has lots of very basic advice and is very useful. It would be a step in the right direction. However, it is important to keep in mind that standards generally reflect the bare minimum,” McGraw said. “Organizations that view security solely as a compliance requirement generally fall short, compared to others that treat it as a core or enabling component of their operations.”

Michael Magrath, director of global regulations and standards at OneSpan, said, “Improving resource allocations is crucial to improving our federal cyberdefenses.”

“With $5.7 billion in projected spending across federal civilian agencies, some agencies may cry poor. The report notes that email consolidation can save millions of dollars each year, and unless agencies have improved efficiencies like email consolidation, have implemented electronic signatures and migrated to the cloud, there remains an opportunity to reallocate funds to better protect their systems,” Magrath said. “The report also notes that agencies are operating multiple versions of the same software. This adds unnecessary expense, and as more and more agencies migrate to the cloud, efficiencies and cost reductions should follow, enabling agencies to reallocate budget and IT resources to other areas.”

What is Windows event log? – Definition from WhatIs.com

The Windows event log is a detailed record of system, security and application notifications stored by the Windows operating system that is used by administrators to diagnose system problems and predict future issues.

Applications and the operating system (OS) use these event logs to record important hardware and software actions that the administrator can use to troubleshoot issues with the operating system. The Windows operating system tracks specific events in its log files, such as application installations, security management, system setup operations on initial startup, and problems or errors.

The elements of a Windows event log

Each event log entry contains the following information:

Date: The date the event occurred.

Time: The time the event occurred.

User: The username of the user logged onto the machine when the event occurred.

Computer: The name of the computer.

Event ID: A Windows identification number that specifies the event type.

Source: The program or component that caused the event.

Type: The type of event, including information, warning, error, security success audit or security failure audit.

For example, an information event might appear as follows (showing the level, date and time, source, event ID and task category):

Information    5/16/2018 8:41:15 AM    Service Control Manager    7036    None

A warning event might look like:

Warning    5/11/2018 10:29:47 AM    Kernel-Event Tracing    1    Logging

By comparison, an error event might appear as:

Error    5/16/2018 8:41:15 AM    Service Control Manager    7001    None

A critical event might resemble:

Critical    5/11/2018 8:55:02 AM    Kernel-Power    41    (63)

The type of information stored in Windows event logs

The Windows operating system records events in five areas: application, security, setup, system and forwarded events. Windows stores event logs in the C:\WINDOWS\system32\config folder.

Application events relate to incidents with the software installed on the local computer. If an application such as Microsoft Word crashes, then the Windows event log will create a log entry about the issue, the application name and why it crashed.

[Video: Configure a centralized Windows Server 2016 event log subscription.]

Security events store information based on the Windows system’s audit policies, and the typical events stored include login attempts and resource access. For example, the security log stores a record when the computer attempts to verify account credentials when a user tries to log on to a machine.

Setup events include enterprise-focused events relating to the control of domains, such as the location of logs after a disk configuration.

System events relate to incidents on Windows-specific systems, such as the status of device drivers.

Forwarded events arrive from other machines on the same network when an administrator wants to use a computer that gathers multiple logs.

Using the Event Viewer

Microsoft includes the Event Viewer in its Windows Server and client operating system to view Windows event logs. Users access the Event Viewer by clicking the Start button and entering Event Viewer into the search field. Users can then select and inspect the desired log.

[Screenshot: The Event Viewer application in the Windows operating system.]

Windows categorizes every event with a severity level. The levels in order of severity are information, warning, error and critical.

Most logs consist of information-based events. Events at this level usually mean the operation completed without incident or issue. An example of a system-based information event is Event 42, Kernel-Power, which indicates the system is entering sleep mode.

Warning-level events are triggered by particular conditions, such as a lack of storage space, and bring attention to potential issues that might not require immediate action. Event 51, Disk is an example of a system-based warning related to a paging error on the machine’s drive.

An error-level event indicates a device may have failed to load or operate as expected. Event 5719, NETLOGON is an example of a system error logged when a computer cannot configure a secure session with a domain controller.

Critical level events indicate the most severe problems. Event ID 41, Kernel-Power is an example of a critical system event when a machine reboots without a clean shutdown.

Other tools to view Windows event logs

Microsoft also provides the wevtutil command-line utility in the System32 folder, which retrieves event logs, runs queries, exports logs, archives logs and clears logs.
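As a rough illustration of that utility in use (assuming a Windows host with Python available; the log name and archive path are arbitrary choices), a script might drive wevtutil like this:

    import subprocess

    # Rough illustration of driving wevtutil from a script on a Windows host.
    # The log name and archive path are arbitrary choices for this example.

    LOG = "Application"
    ARCHIVE = r"C:\Temp\application-archive.evtx"

    # Export the Application log to an .evtx archive.
    subprocess.run(["wevtutil", "epl", LOG, ARCHIVE], check=True)

    # Query the five most recent Application events in human-readable text.
    recent = subprocess.run(
        ["wevtutil", "qe", LOG, "/c:5", "/rd:true", "/f:text"],
        capture_output=True, text=True, check=True,
    )
    print(recent.stdout)

    # Clearing the log after archiving it would be:
    # subprocess.run(["wevtutil", "cl", LOG], check=True)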

Third-party utilities that also work with Windows event logs include SolarWinds Log & Event Manager, which provides real-time event correlation and remediation; file integrity monitoring; USB device monitoring; and threat detection. Log & Event Manager automatically collects logs from servers, applications and network devices.

ManageEngine EventLog Analyzer builds custom reports from log data and sends real-time text message and email alerts based on specific events.

Using PowerShell to query events

Microsoft builds Windows event logs in Extensible Markup Language (XML) format with an .evtx extension. XML provides more granular information and a consistent format for structured data.

Administrators can build complicated XML queries with the Get-WinEvent PowerShell cmdlet to add or exclude events from a query.
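For example, a structured query that pulls the Service Control Manager events shown earlier can be run through Get-WinEvent. The sketch below drives it from Python for consistency with the earlier examples; the same Get-WinEvent call can be pasted straight into PowerShell, and the log name, event ID and output formatting are arbitrary choices for illustration.

    import subprocess

    # Sketch of a structured event query run through Get-WinEvent. The XPath
    # filter selects service state-change events (ID 7036), matching the
    # Service Control Manager examples earlier in this article.

    xpath = "*[System[(EventID=7036)]]"

    ps_command = (
        f"Get-WinEvent -LogName System -FilterXPath '{xpath}' -MaxEvents 10 | "
        "Format-Table TimeCreated, Id, Message -AutoSize"
    )

    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", ps_command],
        capture_output=True, text=True,
    )
    print(result.stdout)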

Feds issue new alert on North Korean hacking campaigns

The FBI and the Department of Homeland Security released an alert on Tuesday regarding malware campaigns connected to a North Korean hacking group known as Hidden Cobra.

The alert, which includes indicators of compromise (IOCs) such as IP addresses, attributes two malware families to the North Korean government by way of Hidden Cobra: a remote access tool called Joanap and a worm known as Brambul, which spreads via Windows’ Server Message Block (SMB) protocol. Both malware families were first identified by Symantec in 2015 and were observed targeting South Korean organizations. Other cybersecurity vendors later attributed the two malware campaigns to the nation-state hacking group Hidden Cobra, also known as Lazarus Group.

However, Tuesday’s alert, which was issued by US-CERT, marks the first time U.S. authorities publicly attributed the malware families and their activity to North Korean hacking operations.

“FBI has high confidence that HIDDEN COBRA actors are using the IP addresses — listed in this report’s IOC files — to maintain a presence on victims’ networks and enable network exploitation,” US-CERT said. “DHS and FBI are distributing these IP addresses and other IOCs to enable network defense and reduce exposure to any North Korean government malicious cyber activity.”

The alert also claimed that, “according to reporting of trusted third parties,” Joanap and Brambul have likely been used by the North Korean hacking group since at least 2009 to target organizations in various vertical industries across the globe. The FBI and DHS didn’t identify those trusted parties, but the alert cited a 2016 report, titled “Operation Blockbuster Destructive Malware Report,” from security analytics firm Novetta, which detailed malicious activity conducted by the Lazarus Group.

DHS’ National Cybersecurity and Communications Integration Center conducted an analysis of the two malware families, and the U.S. government discovered 87 network nodes that had been compromised by Joanap and were used as infrastructure by Hidden Cobra. According to the US-CERT alert, those network nodes were located in various countries outside the U.S., including China, Brazil, India, Iran and Saudi Arabia.

The FBI and DHS attribution case for Brambul and Joanap represents the latest evidence connecting the North Korean government to high-profile malicious activity, including the 2014 breach of Sony Pictures. Last December, the White House publicly attributed the WannaCry ransomware attack to the North Korean government; prior to the U.S. government’s accusation, several cybersecurity vendors had also connected the WannaCry source code, which also exploited the SMB protocol, with the Brambul malware.

The US-CERT alert also follows tense, back-and-forth negotiations between President Donald Trump and North Korean leader Kim Jong Un regarding a U.S.-North Korea summit. Last week, Trump announced the U.S. was withdrawing from the summit, but talks have reportedly resumed.

Apple transparency report shows national security requests rising

The Apple transparency report for the second half of 2017 showed national security requests on the rise, and the number of devices included in requests is up sharply.

The latest semiannual Apple Report on Government and Private Party Requests for Customer Information detailed requests by governments around the world from July 1, 2017, through Dec. 31, 2017. According to Apple, although overall device requests are down, governments around the world have been using fewer requests to attempt to get information on far more accounts.

The Apple transparency report showed a slight year-over-year decrease in the total number of device requests received worldwide (30,184 in the second half of 2016 versus 29,718 in the second half of 2017), but the number of devices impacted by those requests more than doubled, from 151,105 to 309,362.
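As a quick arithmetic check on that shift, the figures above work out to roughly five devices per request in late 2016 versus more than ten in late 2017:

    # Devices per request implied by the figures reported above.
    h2_2016 = 151_105 / 30_184   # about 5.0 devices per request
    h2_2017 = 309_362 / 29_718   # about 10.4 devices per request
    print(round(h2_2016, 1), round(h2_2017, 1))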

Apple is not alone in receiving more government data requests; Google has reported similar increases, but Apple noted it has complied with a higher percentage of government data requests in the second half of 2017 (79%) compared to the same time period in 2016 (72%).

Apple’s transparency report shows the company has been complying with more of the government requests across multiple request types. Apple’s compliance with financial information requests was up year over year from 76% to 85%; account-based request compliance was up from 79% to 82%; and only compliance with emergency requests went down from 86% to 82%.

National security requests also rose sharply, according to the Apple transparency report. In the second half of 2016, Apple received between 5,750 and 5,999 national security requests and complied with the majority of them (between 4,750 and 4,999). In the same time period in 2017, Apple received more than 16,000 national security requests, but only provided data to the U.S. government in about half of those cases.

Richard Goldberg, principal and litigator at the law firm Goldberg & Clements in Washington, D.C., said he was struck by the large percentage of U.S. government requests made by either national security request or subpoena.

“Although Apple has challenged certain government requests aggressively in public, we don’t know how aggressive the company has been in private — which is especially relevant because these requests typically do not require a judge’s approval,” Goldberg said via email. “So the government collects this information, and it may never see the inside of a courtroom.”

Additional information

Goldberg added that the general level of detail in Apple’s transparency report is helpful, but suggested Apple “should break out administrative subpoenas from all other types.”

“Administrative subpoenas can have broad scope, because they often need only be related to something the agency is permitted to investigate, and they need not be connected to a grand jury proceeding or active litigation,” Goldberg said. “It’s a one-sided way for the government to demand information with little to no oversight, unless the recipient chooses to fight. And we don’t know how Apple makes that decision.”

According to Apple, the predominant reason for financial information requests around the world was credit card and iTunes gift card fraud, and in multiple regions — including the U.S. — a “high number of devices specified in requests [was] predominantly due to device repair fraud investigations, fraudulent purchase investigations and stolen device investigations.”

It is unclear what data in the Apple transparency report correlates to the allegedly large number of devices the FBI and other law enforcement cannot access due to encryption, nor is it clear which data in the report correlates to iCloud backup data, which Apple has previously admitted to handing over to law enforcement.

SearchSecurity contacted Apple for clarification on these issues and Apple referred to its Legal Process Guidelines, which detailed the types of data in iCloud backups that Apple would be able to provide to law enforcement, including the subscriber’s name, address, email, telephone, mail logs, email content, iMessage data, SMS, photos and contacts.

However, Apple did note in the report that it would be adding “government requests to take down Apps from the App Store in instances related to alleged violations of legal and/or policy provisions,” starting with the transparency report for the second half of 2018.