
ICS security fails the Black Hat test

The news at Black Hat 2018 wasn’t great when it came to industrial control systems. While numerous sessions added up to a sweeping condemnation of ICS security, there was at least the occasional saving grace: some vendors will correct some problems, at least some of the time. Still, the apparent lack of a security-conscious culture within these organizations means they’ll fix only the minimum, leaving similar products with the same underlying hardware, firmware and fatal bugs untouched and unsecured.

Speaking in a session called “Breaking the IIoT: Hacking Industrial Control Gateways,” Thomas Roth, security researcher and founder of Leveldown Security, an embedded and ICS security consulting and research company based in Esslingen, Germany, walked through the security faults of a series of five gateway devices he’d found at prices he could afford on eBay. He wanted to look at commonly deployed, relatively current devices — things you find in the real world.

“If you go out on the network and start scanning, you’ll find thousands of these devices. In fact, you’ll find entire network ranges that are used almost exclusively for these devices,” he said.

“Often, they use static IP addresses with no VPN protection.” One device he looked at had a proprietary protocol for its wireless communications. But if you could break it — and he did — you had access to every one of those devices in the field, because the network addressing architecture was flat and unsegmented.

The first device he looked at, a Moxa W2150A, was typical of his various experiments. It connects ICS devices to wireless networks via an Ethernet port on the device side and a wireless interface on the other. Between the two interfaces is an easily opened case that reveals a circuit board with pads for connecting to a debugging port. In a theme common to many of the devices discussed at the conference, Roth discovered the port was a serial terminal connection that booted directly to a root shell in Linux.

“This is a design decision, not a bug,” Roth said. But he noted that if you have the device and you can access a root shell, then as you are writing exploits, you can debug them directly on the device, “which is a pretty nice situation to be in.”
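
For readers unfamiliar with the technique, the sketch below shows roughly how a researcher attaches to such a header once the pads are wired to a USB-UART adapter. It is a minimal illustration using the pyserial library; the device path and baud rate are assumptions for the example, not values Roth disclosed.

```python
# Minimal UART console sketch (pyserial). The /dev/ttyUSB0 path and the
# 115200 baud rate are common defaults assumed for illustration.
import serial

port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)
port.write(b"\n")                                  # nudge the console
print(port.read(4096).decode(errors="replace"))    # on devices like these: a root prompt
port.write(b"id\n")                                # "uid=0(root)" would confirm root access
print(port.read(4096).decode(errors="replace"))
port.close()
```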

Roth noted the firmware for the device was available on the internet from the Moxa website, but it was encrypted. At first, this seemed like a dead end. But in looking at earlier firmware versions, he noticed one of the upgrades included adding the feature of encrypting the firmware.

This led him to an unencrypted update version, which included a package called “upgrade_firmware.” This, in turn, led to a function called “firmware_decrypt” — a function name that gave the audience a chuckle — which gave him plaintext access to the current version of the software. The decryption key was, needless to say, included in the upgrade code.
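
The practical consequence is easy to show. Below is a hypothetical sketch of what an attacker can do once an update package carries its own decryption key; the cipher, mode, key bytes and file names are all illustrative assumptions, since the talk did not spell out Moxa’s actual scheme.

```python
# Hypothetical sketch: firmware "encryption" is no barrier once the key
# ships inside the upgrade code itself. AES-CBC, the key bytes and the
# file names are assumptions for illustration only.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("00112233445566778899aabbccddeeff")  # recovered from upgrade_firmware
iv = bytes.fromhex("000102030405060708090a0b0c0d0e0f")

with open("firmware.enc", "rb") as f:
    ciphertext = f.read()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
plaintext = decryptor.update(ciphertext) + decryptor.finalize()

with open("firmware.bin", "wb") as f:
    f.write(plaintext)          # plaintext access to the current firmware
```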

Roth raised an issue that hasn’t been much discussed in ICS security: supply chain security issues caused by the wide prevalence of openly accessible terminal access ports on devices. You can change the firmware, he said, write the changed version back to the device, return it to your distributor without mentioning the change, “and they will happily resell it to someone else.” In fact, he knows this because he conducted an experiment and was sold a device with firmware he had previously rewritten.

Roth discussed four more devices in some detail, with two of them still in the process of disclosure, “and there are a lot of fun issues.”

Beyond Roth’s pathway strewn with pwned gateways, there were other such sessions, including ones that found significant vulnerabilities in medical devices, cellular gateways, smart city infrastructure and satellite communications.

Jonathan Butts, CEO of security consultancy QED Secure Solutions, located in Coppell, Texas, noted in a press conference at the event that dealing with vendors on ICS security disclosures had been particularly frustrating. In the case of a pacemaker made by Medtronic, a protracted disclosure process ended with the company deciding that changes to the product weren’t necessary. That decision led Butts and co-speaker Billy Rios, founder of WhiteScope LLC, a cybersecurity company based in Half Moon Bay, Calif., to demonstrate their attack live and let the audience judge for itself.

“To be honest,” Butts said, “after about the one-and-a-half-year mark, and you see stuff like [Medtronic’s response], you get fed up.”

ICS security: Protection? Not

While it’s theoretically possible to protect at least the devices that aren’t implanted in human bodies by placing the ICS equivalents of a firewall at strategic network junction points, a session by Airbus security evaluators Julien Lenoir and Benoit Camredon showed a widely deployed ICS firewall made by Belden could be remotely exploited.

The Tofino Xenon device is typically situated between the IP-based control network and local ICS assets that use Modbus, EtherNet/IP or OPC protocols. Interestingly, the device itself doesn’t have an IP address; it is essentially invisible to ordinary interrogation on the network.

A custom protocol allows a Windows machine running a configurator to discover and then send configuration data to a Xenon device. The configurator knows the addresses of protected ICS devices and knows the Xenon is somewhere between the configurator and the devices. The Xenon knows to watch for packets that carry a specific payload and recognizes them as packets from a configurator.

The two researchers were able to reverse-engineer the protocol enough to understand the arrangement that was used for encryption keys. The configurator discovers devices using a common key and then generates two additional keys that are unique to the particular pairing of that configurator and that specific firewall. All of these keys could be extracted from the discovery session, and then the keys unique to the device were used to establish a connection with the device.

“We were able to get a root shell,” Lenoir told the audience, echoing the familiar theme that almost all ICS devices run on outdated Linux kernels. “Once everything was running as root, now the appliance was no longer a black box, but was instead a Linux kernel.”

From here, they settled on an attack model that used the devices’ ability to be updated from files on a USB stick. Camredon explained the updates comprised two files, both encrypted. “One is an update script, and one is a data file that is an image, including an image of the kernel.”

It turned out that all configurators and all Tofino Xenon devices used the same key for decrypting the update files. Because they had access to root on the Xenon, they were able to extract this key, at which point they further discovered there were no checks in the update script to ensure the data file hadn’t been tampered with since it was created.

Thus, a breached Xenon could be modified in whatever way the attackers wanted, an image of that system made, and the image could be encrypted and included in an update package without the separate installation script detecting the change.
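
The missing integrity check is only a few lines of code. Here is a minimal sketch of one conventional approach an update script could take, verifying an HMAC-SHA256 tag computed when the package is built; the key handling and file names are assumptions for illustration, not Belden’s design.

```python
# One conventional way an update script can refuse tampered images:
# verify an HMAC-SHA256 tag computed over the image at build time.
# Key handling and file names are illustrative assumptions.
import hashlib
import hmac

def image_is_authentic(image_path: str, tag_path: str, key: bytes) -> bool:
    with open(image_path, "rb") as f:
        expected = hmac.new(key, f.read(), hashlib.sha256).digest()
    with open(tag_path, "rb") as f:
        shipped = f.read()
    return hmac.compare_digest(expected, shipped)  # constant-time comparison

if not image_is_authentic("update.img", "update.img.hmac", key=b"per-device-secret"):
    raise SystemExit("update image failed integrity check; refusing to install")
```

An asymmetric signature scheme such as Ed25519 would be stronger still: a shared HMAC key extracted from one device would let an attacker forge tags for every device, the same class of single-shared-secret weakness the researchers exploited.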

The Xenon has been updated to correct these problems since the researchers disclosed their findings. So, in theory, the firewall is back in business. One problem Roth noted, though, is these systems often come in dozens of variants, with different names and model numbers.

“If you report a bug to some of these vendors,” Roth said, “the vulnerability gets fixed, but then there are 10 different devices which run the same firmware, and they are left completely unpatched.”

Roth suggested this was a clear indication of the lack of security culture at many ICS vendors.

“It’s like exploiting in the ’90s,” he concluded. “We have no integrity protections on any of these devices.”

At another moment, he made a sweeping generalization: “Everything runs as root; everything runs on outdated Linux kernels; everything runs on outdated web servers. If any of these components fails, you have root permission.”

Irregularities discovered in WinVote voting machines

LAS VEGAS — The insecurity of electronic voting systems has been well-documented, but so far there has been no concrete evidence that those systems have been hacked in the field. However, a forensic analysis by security researcher Carsten Schuermann discovered irregularities in eight WinVote voting machines used in Virginia elections for more than a decade.

Speaking at Black Hat 2018, Schuermann, associate professor at IT University of Copenhagen, presented data that showed voting machine irregularities in WinVote systems used in a variety of state and federal elections from 2004 to 2014. In his session, titled “Lessons from Virginia – A Comparative Forensic Analysis of WinVote Voting Machines,” Schuermann also pushed for mandated paper ballots and regular audits to mitigate potential threats.

“When you add technology to the voting process, you clearly increase its attack surface,” Schuermann said.

Schuermann noted that there are actually two problems with insecure voting machines. The first is obvious — the systems can be easily hacked.

“That’s a real threat,” he said. “But the other threat is equally important and equally dangerous, and that is the threat of an alleged cyberattack — when people claim there was a cyberattack when there actually wasn’t.”

Such allegations can disrupt elections and damage the credibility of voting results. And since too many voting machines don’t produce paper trails, he said, those allegations can be as damaging as a real cyberattack.

Schuermann had such a voting machine with him on stage — a decommissioned WinVote system that had a printer but only printed vote tallies and not individual ballots. He said he obtained eight WinVote voting machines from an unnamed source two years ago, and first hacked into one of the machines for a DEFCON Voting Village session last year.

Schuermann followed up with a deeper forensic analysis that uncovered concerning voting machine irregularities as well as serious vulnerabilities. He told the audience that while he had access to the machines’ SSDs, he did not have any access to memory or memory dumps, security logs or a record of wireless connections.

But what data was available showed a number of holes that hackers could exploit, including open ports (135, 139, 445 and 3387, among others) and unpatched versions of Windows XP Embedded from 2002 that were vulnerable to a critical buffer overflow attack, CVE-2003-0352.
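
Exposure of this kind is trivial to spot. The sketch below shows the sort of basic TCP connect check that reveals such open ports; the target address is a documentation placeholder, not one of the analyzed machines.

```python
# Basic TCP connect check against the ports reported open on the WinVote
# systems. The target address is a placeholder, not a real machine.
import socket

TARGET = "192.0.2.10"                  # placeholder address (TEST-NET-1)
PORTS = [135, 139, 445, 3387]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        print(f"{TARGET}:{port} {state}")
```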

“Another problem is that this machine has wireless turned on all the time,” Schuermann said, adding that the wireless password for the systems was “ABCDE.” “That’s not a very secure password.”

I have only one conclusion, and that is, use paper and do your audits.
Carsten Schuermann, associate professor, IT University of Copenhagen

Those vulnerabilities in themselves didn’t prove the machines had been hacked, but a closer examination of files on some of the WinVote voting machines showed unexplained anomalies. One of the machines, for example, had MP3s of a Chinese pop song and traces of CD ripping software, and data showed the machine broadcast the song on the internet. That was strange, he said, but there were more concerning voting machine irregularities.

For example, three of the machines used during the 2005 Virginia gubernatorial election dialed out via their modems on Election Day, though the data didn’t explain why. Schuermann speculated that perhaps the systems were getting a security update, but one of the machines actually dialed the wrong number.

In addition, two of the systems that were used in the 2013 Virginia state elections had more than 60 files modified on Election Day before the polls closed, and USB devices were connected to one of the machines while the polls were open.

“That’s really bizarre,” he said.

It was unclear whether the files were modified as part of a system update, he said, and there wasn’t enough data to explain what those USB connections were for. Schuermann cautioned the audience that the voting machine irregularities weren’t necessarily evidence of hacking, but he said the uncertainty about the irregularities should serve as a call to action. Only a few states, he said, have electronic voting systems that produce paper ballots and can be audited.

“I have only one conclusion,” he said. “And that is, use paper and do your audits.”

SIEM benefits include efficient incident response, compliance

Security information and event management (SIEM) systems collect security log events from numerous hosts within an enterprise and store their relevant data centrally. By bringing this log data together, these SIEM products enable centralized analysis and reporting on an organization’s security events.

SIEM benefits include detecting attacks that other systems missed. Some SIEM tools also attempt to stop attacks — assuming the attacks are still in progress.

SIEM products have been available for many years, but the initial tools were targeted at large organizations with sophisticated security capabilities and ample security analyst staffing. It is only relatively recently that SIEM systems have emerged that are well-suited to the needs of small and medium-sized organizations.

SIEM architectures available today include SIEM software installed on a local server, a local hardware or virtual appliance dedicated to SIEM, and a public cloud-based SIEM service.

Different organizations use SIEM systems for different purposes, so SIEM benefits vary across organizations. This article looks at the three top SIEM benefits, which are:

  • streamlining compliance reporting;
  • detecting incidents that would otherwise not be detected; and
  • improving the efficiency of incident handling.

1. Streamline compliance reporting

Many organizations deploy SIEM tools for this benefit alone: streamlining enterprise compliance reporting efforts through a centralized logging solution. Each host that needs to have its logged security events included in reporting regularly transfers its log data to a SIEM server. A single SIEM server receives log data from many hosts and can generate one report that addresses all of the relevant logged security events among those hosts.

An organization without a SIEM system is unlikely to have robust centralized logging capabilities that can create rich customized reports, such as those necessary for most compliance reporting efforts. In such an environment, it may be necessary to generate individual reports for each host or to manually retrieve data from each host periodically and reassemble it at a centralized point to generate a single report.


The latter can be incredibly difficult, in no small part because different operating systems, applications and other pieces of software are likely to log their security events in various proprietary ways, making correlation a challenge. Converting all of this information into a single format may require extensive code development and customization.
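
As a rough illustration of what that normalization work involves, the sketch below maps two invented log formats onto a single schema so they can feed one report; neither format is taken from a real product.

```python
# Rough sketch of log normalization: map two invented, proprietary-looking
# formats onto one common schema so a single report can cover both.
import re
from datetime import datetime

def parse_windows_style(line: str) -> dict:
    # e.g. "2018-08-09 14:02:11 HOST1 4625 Logon failure: alice"
    ts, host, event_id, detail = re.match(r"(\S+ \S+) (\S+) (\d+) (.+)", line).groups()
    return {"time": ts.replace(" ", "T"), "host": host, "event": event_id, "detail": detail}

def parse_syslog_style(line: str, year: int = 2018) -> dict:
    # e.g. "Aug  9 14:02:12 fw01 denied tcp 10.0.0.5 -> 10.0.0.9:445"
    ts, host, detail = re.match(r"(\w+ +\d+ \S+) (\S+) (.+)", line).groups()
    when = datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S")  # syslog omits the year
    return {"time": when.isoformat(), "host": host, "event": "syslog", "detail": detail}

events = [
    parse_windows_style("2018-08-09 14:02:11 HOST1 4625 Logon failure: alice"),
    parse_syslog_style("Aug  9 14:02:12 fw01 denied tcp 10.0.0.5 -> 10.0.0.9:445"),
]
for event in sorted(events, key=lambda e: e["time"]):
    print(event)
```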

Another reason why SIEM tools are so useful is that they often have built-in support for most common compliance efforts. Their reporting capabilities are compliant with the requirements mandated by standards such as the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the Sarbanes-Oxley Act.

By using SIEM logs, an organization can save considerable time and resources when meeting its security compliance reporting requirements, especially if it is subject to more than one such compliance initiative.

2. Detect the undetected

SIEM systems are able to detect otherwise undetected incidents.

Many hosts that log security events do not have built-in incident detection capabilities. Although these hosts can observe events and generate audit log entries for them, they lack the ability to analyze the log entries to identify signs of malicious activity. At best, these hosts, such as end-user laptops and desktops, might be able to alert someone when a particular type of event occurs.

SIEM tools offer increased detection capabilities by correlating events across hosts. By gathering events from hosts across the enterprise, a SIEM system can see attacks that have different parts on different hosts and then reconstruct the series of events to determine what the nature of the attack was and whether or not it succeeded.

In other words, while a network intrusion prevention system might see part of an attack and a laptop’s operating system might see another part of the attack, a SIEM system can correlate the log data for all of these events. A SIEM tool can determine if, for example, a laptop was infected with malware which then caused it to join a botnet and start attacking other hosts.
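
A toy version of that correlation logic makes the idea concrete. The rule below, using invented event data, flags a host when an endpoint malware detection is followed within minutes by network-layer evidence from a different sensor.

```python
# Toy cross-host, cross-sensor correlation rule with invented event data:
# flag a host when an endpoint malware detection is followed shortly by
# suspicious network activity reported by a different source.
from datetime import datetime, timedelta

events = [
    {"time": "2018-08-09T10:00:05", "host": "laptop7", "source": "endpoint",
     "type": "malware_detected"},
    {"time": "2018-08-09T10:03:40", "host": "laptop7", "source": "netflow",
     "type": "outbound_scan"},
]

WINDOW = timedelta(minutes=10)

def when(event):
    return datetime.fromisoformat(event["time"])

for d in (e for e in events if e["type"] == "malware_detected"):
    followups = [e for e in events
                 if e["host"] == d["host"] and e["source"] != d["source"]
                 and timedelta(0) < when(e) - when(d) <= WINDOW]
    if followups:
        print(f"ALERT: {d['host']}: malware detection followed by "
              f"{[e['type'] for e in followups]} within {WINDOW}")
```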

It is important to understand that while SIEM tools have many benefits, they should not replace enterprise security controls for attack detection, such as intrusion prevention systems, firewalls and antivirus technologies. A SIEM tool on its own is useless because it has no ability to monitor raw security events as they happen throughout the enterprise in real time. SIEM systems use log data as recorded by other software.

Many SIEM products also have the ability to stop attacks while they are still in progress. The SIEM tool itself doesn’t directly stop an attack; rather, it communicates with other enterprise security controls, such as firewalls, and directs them to block the malicious activity. This incident response capability enables the SIEM system to prevent security breaches that other systems might not have noticed elsewhere in the enterprise.

To take this a step further, an organization can choose to have its SIEM tool ingest threat intelligence data from trusted external sources. If the SIEM tool detects any activity involving known malicious hosts, it can then terminate those connections or otherwise disrupt the malicious hosts’ interactions with the organization’s hosts. This surpasses detection and enters the realm of prevention.
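
In practice that can be as simple as matching connection logs against a feed of known-bad addresses and issuing a block directive, as in this placeholder sketch; a real deployment would call the firewall vendor’s management API rather than print.

```python
# Sketch of threat-intelligence matching with an invented blocklist and
# connection records; a real SIEM would push the block to a firewall API.
blocklist = {"198.51.100.23", "203.0.113.66"}   # from an external intel feed

connections = [
    {"src": "10.0.0.14", "dst": "198.51.100.23", "port": 443},
    {"src": "10.0.0.9",  "dst": "93.184.216.34", "port": 80},
]

for conn in connections:
    if conn["dst"] in blocklist:
        print(f"BLOCK {conn['src']} -> {conn['dst']}:{conn['port']} (threat intel match)")
```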

3. Improve the efficiency of incident handling activities

Another of the many SIEM benefits is that SIEM tools significantly increase the efficiency of incident handling, which in turn saves time and resources for incident handlers. More efficient incident handling ultimately speeds incident containment, thus reducing the amount of damage that many security breaches and incidents cause.

A SIEM tool can improve efficiency primarily by providing a single interface to view all the security log data from many hosts. Examples of how this can expedite incident handling include:

  • it enables an incident handler to quickly identify an attack’s route through the enterprise;
  • it enables rapid identification of all the hosts that were affected by a particular attack; and
  • it provides automated mechanisms to stop attacks that are still in progress and to contain compromised hosts.

The benefits of SIEM products make them a necessity

The benefits of SIEM tools enable an organization to get a big-picture view of its security events throughout the enterprise. By bringing together security log data from enterprise security controls, host operating systems, applications and other software components, a SIEM tool can analyze large volumes of security log data to identify attacks, security threats and compromises. This correlation enables the SIEM tool to identify malicious activity that no other single host could because the SIEM tool is the only security control with true enterprise-wide visibility.      

Businesses turn to SIEM tools for a few different purposes. One of the most common SIEM benefits is streamlined reporting for security compliance initiatives — such as HIPAA, PCI DSS and Sarbanes-Oxley — by centralizing the log data and providing built-in support to meet the reporting requirements of each initiative.

Another common use for SIEM tools is detecting incidents that would otherwise be missed and, when possible, automatically stopping attacks that are in progress to limit the damage.

Finally, SIEM products can also be invaluable for improving the efficiency of incident handling activities, both by reducing resource utilization and by allowing real-time incident response, which also helps to limit the damage.

Today’s SIEM tools are available for a variety of architectures, including public cloud-based services, which makes them suitable for use in organizations of all sizes. Considering their support for automating compliance reporting, incident detection and incident handling activities, SIEM tools have become a necessity for virtually every organization.

BGP hijacking attacks target payment systems

Researchers discovered BGP hijacking attacks targeting payment processing systems and using new tricks to maximize the attackers’ hold on DNS servers.

Doug Madory, director of internet analysis at Oracle Dyn, first saw these Border Gateway Protocol (BGP) hijacking attacks in April 2018 and has seen them continue through July. The first attack targeted an Amazon DNS server in order to lure victims to a malicious site and steal cryptocurrency, but more recent attacks targeted a wider range of U.S. payment services.

“As in the Amazon case, these more recent BGP hijacks enabled imposter DNS servers to return forged DNS responses, misdirecting unsuspecting users to malicious sites.  By using long TTL values in the forged responses, recursive DNS servers held these bogus DNS entries in their caches long after the BGP hijack had disappeared — maximizing the duration of the attack,” Madory wrote in a blog post. “The normal TTL for the targeted domains was 10 minutes (600 seconds).  By configuring a very long TTL, the forged record could persist in the DNS caching layer for an extended period of time, long after the BGP hijack had stopped.”

Madory detailed attacks on telecom companies in Indonesia and Malaysia as well as BGP hijacking attacks on U.S. credit card and payment processing services, the latter of which lasted anywhere from a few minutes to almost three hours. While the payment services attacks featured similar techniques to the Amazon DNS server attack, it’s unclear if the same threat actors are behind them.

Justin Jett, director of audit and compliance for Plixer, said BGP hijacking attacks are “extremely dangerous because they don’t require the attacker to break into the machines of those they want to steal from.”

“Instead, they poison the DNS cache at the resolver level, which can then be used to deceive the users. When a DNS resolver’s cache is poisoned with invalid information, it can take a long time post-attack to clear the problem. This is because of how DNS TTL works,” Jett wrote via email. “As Oracle Dyn mentioned, the TTL of the forged response was set to about five days. This means that once the response has been cached, it will take about five days before it will even check for the updated record, and that is how long the problem will remain, even once the BGP hijack has been resolved.”
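
The arithmetic behind that persistence is easy to simulate. The toy resolver cache below, with placeholder names and addresses, shows why a forged record with a five-day TTL keeps answering long after the hijack ends, while a normal 600-second record would have aged out within minutes.

```python
# Toy resolver cache showing why TTL governs how long a forged DNS record
# persists. Names and addresses are placeholders.
import time

class ResolverCache:
    def __init__(self):
        self.cache = {}                            # name -> (address, expiry time)

    def put(self, name, address, ttl_seconds):
        self.cache[name] = (address, time.time() + ttl_seconds)

    def resolve(self, name):
        address, expiry = self.cache.get(name, (None, 0.0))
        if time.time() < expiry:
            return address                         # served from cache; no fresh lookup
        return "re-query authoritative DNS"

cache = ResolverCache()
cache.put("payments.example", "203.0.113.7", ttl_seconds=5 * 24 * 3600)  # forged, ~5 days
# Even after the BGP hijack is withdrawn, lookups keep returning the
# attacker's address until the five-day TTL expires.
print(cache.resolve("payments.example"))
```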

Madory was not optimistic about what these BGP hijacking attacks might portend because of how fundamental BGP is to the structure of the internet.

“If previous hijacks were shots across the bow, these incidents show the internet infrastructure is now taking direct hits,” Madory wrote. “Unfortunately, there is no reason not to expect to see more of these types of attacks against the internet.”

Matt Chiodi, vice president of cloud security at RedLock, was equally worried and said these BGP hijacking attacks should be taken as a warning.

“BGP and DNS are the silent warriors of the internet and these attacks are extremely serious because nearly all other internet services assume they are secure. Billions of users rely on these mostly invisible services to accomplish everything from Facebook to banking,” Chiodi wrote via email. “Unfortunately, mitigating BGP and DNS-based attacks is extremely difficult given the trust-based nature of both systems.”

Reddit breach sparks debate over SMS 2FA

Reddit admitted its systems were breached after an attacker was able to compromise the short message service (SMS) two-factor authentication used by employees.

According to Christopher Slowe, CTO and founding engineer at Reddit, the main attack leading to the Reddit breach involved a threat actor intercepting SMS-based 2FA codes.

“On June 19, we learned that between June 14 and June 18, an attacker compromised a few of our employees’ accounts with our cloud and source code hosting providers. Already having our primary access points for code and infrastructure behind strong authentication requiring two-factor authentication (2FA), we learned that SMS-based authentication is not nearly as secure as we would hope, and the main attack was via SMS intercept,” Slowe wrote in a post on the social news site. “We point this out to encourage everyone here to move to token-based 2FA.”

Slowe wrote that the attacker accessed user data, including some current email addresses, as well as “account credentials (username + salted hashed passwords), email addresses, and all content (mostly public, but also private messages)” from 2007. The attacker was apparently limited to read-only access on Reddit systems, and Reddit has since rotated all production secrets and API keys and taken steps to harden access management security with “enhanced logging, more encryption and requiring token-based 2FA to gain entry since we suspect weaknesses inherent to SMS-based 2FA to be the root cause of this incident.”
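
Token-based 2FA of the kind Reddit is moving to is typically TOTP, the time-based one-time password algorithm behind most authenticator apps. As a minimal sketch of how those six-digit codes are derived, here is an RFC 6238 implementation using only the Python standard library; the example secret is made up.

```python
# Minimal RFC 6238 TOTP sketch: HMAC-SHA1 over a 30-second time counter,
# dynamically truncated to six digits. The secret below is made up.
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)    # big-endian 64-bit counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                               # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches an authenticator app seeded with the same secret
```

Because the code is derived on the device from a shared secret and the clock, there is nothing for an attacker to intercept in transit, which is precisely the weakness of SMS delivery.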

SMS 2FA security

Because the Reddit breach was blamed on the security limitations inherent to SMS-based 2FA, experts have begun to debate whether it’s worth using as an authentication method.

What can be taken from this attack is that, while SMS authentication can be used to boost security, two-factor authentication that involves standalone hardware token generators is needed to mitigate the risk of such attacks.
Leigh-Anne Galloway, cybersecurity resilience lead, Positive Technologies

Even back in 2016, when NIST advised organizations to stop using SMS-based 2FA, experts said the recommendation was overdue because of known techniques to intercept one-time codes sent via SMS, whether through malware on smartphones, by exploiting the SS7 protocol or by cloning a victim’s SIM card.

Craig Young, computer security researcher for Tripwire’s Vulnerability and Exposures Research Team, noted that “while SMS interception has been a common trick in opportunistic financial fraud, it is far less common to hear about this method being used in this type of targeted attack of a public service.”

“Although any form of multi-factor authentication is a considerable improvement on simple password models, SMS-based verification tokens can be stolen with a variety of well-known techniques, including social engineering, mobile malware, or by directly intercepting and decrypting signals from cell towers,” Young wrote via email. “An attacker within the same cellular coverage area as the victim could even intercept and decrypt SMS out of the air with just a couple hundred dollars’ worth of equipment. The moral of this story is that SMS-based two-factor authentication should not be considered ‘strong’ in the face of a determined attacker.”

However, there was no clear consensus among experts about SMS-based 2FA. Many acknowledged the flaws in the system, but noted it was still better than not using 2FA at all.

Pravin Kothari, CEO of CipherCloud, said it is still far too common for users to not use any 2FA.

“Today, use of two-factor authentication is a best practice still not used by most authenticating systems. Even when two-factor is offered, for example, in Google’s Gmail, over 90% of the Gmail users don’t opt to use it,” Kothari wrote via email. “Given that two-factor authentication is still a best practice, the likely move by financial institutions will be to utilize token-based SMS systems, instead of mobile phone-based systems. In any case, two-factor authentication, even with a mobile phone, is still much better than not using two-factor.”

Leigh-Anne Galloway, cybersecurity resilience lead at Positive Technologies, said the Reddit breach is an example of businesses placing “unwarranted faith in two-factor authentication.”

“While lots of organizations think 2FA is a silver bullet for authentication, it actually isn’t, thanks to weaknesses in mobile networks which allow SMS [messages] to be intercepted,” Galloway wrote via email. “What can be taken from this attack is that, while SMS authentication can be used to boost security, two-factor authentication that involves standalone hardware token generators is needed to mitigate the risk of such attacks. SMS alone is not enough to constitute adequate defense of customer and employee data.”

Ilia Kolochenko, CEO of High-Tech Bridge, said he would “refrain from blaming 2FA SMS — in many cases it’s still better than nothing.”

“Moreover, when most of business critical applications have serious vulnerabilities varying from injections to [remote code execution], 2FA hardening is definitely not the most important task to take care of,” Kolochenko wrote via email, adding that there may be more to the Reddit breach story. “I would equally be cautiously optimistic about the size of the disclosed data breach and thoroughly ascertain that no other systems or user accounts were compromised. Often large-scale attacks are conducted in parallel by several interconnected cybercrime groups aimed to distract, confuse and scare security teams. While attack vectors of the first group are being mitigated, others are actively exploited, often not without success.”

Report: ERP security is weak, vulnerable and under attack

ERP systems are seeing growing levels of attack for two reasons. First, many of these systems — especially in the U.S. — are now connected to the internet. Second, ERP security is hard. These systems are so complex and customized that patching is expensive, complicated and often put off. 

Windows systems are often patched within days, but users may wait years to patch some ERP systems. Old versions of PeopleSoft and other ERP applications, for instance, remain out of date and connected to the internet, according to researchers at two cybersecurity firms that jointly examined the risks to ERP security.

These large corporate systems, which manage global supply chains and manufacturing operations, could be compromised and shut down by an attacker, said Juan Pablo Perez-Etchegoyen, CTO of Onapsis, a cybersecurity firm based in Boston.

“If someone manages to breach one of those [ERP] applications, they could literally stop operations for some of those big players,” Perez-Etchegoyen said in an interview. His firm, along with Digital Shadows, released a report, “ERP Applications Under Fire: How Cyberattackers Target the Crown Jewels,” which was recently cited as a must-read by the U.S. Computer Emergency Readiness Team within the Department of Homeland Security. This report looked specifically at Oracle and SAP ERP systems.

Warnings of security vulnerabilities are not new

Cybersecurity researchers have been warning for a long time that U.S. critical infrastructure is vulnerable. Much of the focus has been on power plants and other utilities. But ERP systems manage critical infrastructure, too, and the report by Onapsis and Digital Shadows is seen as backing up a broader worry about infrastructure risks.

“The great risk in ERP is disruption,” said Alan Paller, the founder of SANS Institute, a cybersecurity research and education organization in Bethesda, Md.

If the attackers were just interested in extortion or gaining customer data, there are easier targets, such as hospitals and e-commerce sites, Paller said. What the attackers may be doing with ERP systems is prepositioning, which can mean planting malware in a system for later use.

In other words, attackers “are not sure what they are going to do” once they get inside an ERP system, Paller said. But they would rather get inside the system now, and then try to gain access later, he said.

The report by Onapsis and Digital Shadows found increased hacker interest in ERP-specific vulnerabilities. That interest has been tracked across a variety of sources, including the dark web, a part of the internet accessible only through special networks.

Complexity makes ERP security difficult

The complexity of ERP applications makes it really hard and really costly to apply patches.
Juan Pablo Perez-Etchegoyen, CTO, Onapsis

The problem facing ERP security, Perez-Etchegoyen said, is “the complexity of ERP applications makes it really hard and really costly to apply patches. That’s why some organizations are lagging behind.”

SAP and Oracle, in emailed responses to the report, both said something similar: Customers need to stay up-to-date on patches.

“Our recommendation to all of our customers is to implement SAP security patches as soon as they are available — typically on the second Tuesday of every month — to protect SAP infrastructure from attacks,” SAP said.

Oracle pointed out that it “issued security updates for the vulnerabilities listed in this report in July and in October of last year. The Critical Patch Update is the primary mechanism for the release of all security bug fixes for Oracle products. Oracle continues to investigate means to make applying security patches as easy as possible for customers.”

One of the problems is knowing the intent of the attackers, and the report cited a full range of motives, including cyberespionage and sabotage, pursued by a variety of groups, from hacktivists to foreign countries.

Next wave of attacks could be destructive

But one fear is the next wave of major attacks will attempt to destroy or cause real damage to systems and operations.

This concern was something Edward Amoroso, retired senior vice president and CSO of AT&T, warned about.

In a widely cited open letter in November 2017 to then-President-elect Donald Trump, Amoroso said attacks “will shift from the theft of intellectual property to destructive attacks aimed at disrupting our ability to live as free American citizens.” The ERP security report’s findings were consistent with his earlier warning, he said in an email.

Foreign countries know that “companies like SAP, Oracle and the like are natural targets to get info on American business,” Amoroso said. “All ERP companies understand this risk, of course, and tend to have good IT security departments. But going up against military actors is tough.”

Amoroso’s point about the risk of a destructive attack was specifically cited and backed by a subsequent MIT report, “Keeping America Safe: Toward More Secure Networks for Critical Sectors.”  The MIT report warned that attackers enjoy “inherent advantages owing to human fallibility, architectural flaws in the internet and the devices connected to it.”

Vendor admits election systems included remote access software

Election system security was compromised by the installation of remote access software on systems over the span of six years, a vendor admitted in a letter to a senator.

Election Systems & Software (ES&S), a voting machine manufacturer based in Omaha, Neb., admitted it installed the flawed PCAnywhere remote access software on its election management system (EMS) workstations for a “small number of customers between 2000 and 2006,” according to a letter sent to Sen. Ron Wyden (D-Ore.) that was obtained by Motherboard.

The PCAnywhere source code was stolen from Symantec servers in 2006, leaving the software vulnerable, and further issues in 2012 caused Symantec to suggest users uninstall the program before the company officially put PCAnywhere to its end of life in 2014.

ES&S had previously denied knowledge of the use of remote access software on its election management systems, but told Wyden about the vulnerable software that could have put voting machine security at risk. ES&S wrote that it stopped installing the PCAnywhere software in December 2007 due to new policies enacted by the Election Assistance Commission regarding voting machine security.

Gene Shablygin, CEO and founder of WWPass, an identity and access management company based in Manchester, N.H., said the actions by ES&S were “pretty consistent with the overall state of computer security” for the time.

“Today, these technologies and general approaches are totally unacceptable, and must be completely reworked. The last decade especially, was the period of explosive growth of hacking technologies, and the defensive side of many systems was left in the dust. So, most of the systems that are still in use — and voting systems are no exception — have multiple vulnerabilities, some of which are zero-day, or not yet discovered,” Shablygin wrote via email. “You can’t stop progress, and sooner or later, remote voting will become a matter of everyday life.”

Lane Thames, senior security researcher at Tripwire, agreed that the failures of ES&S with election system security shouldn’t be surprising, “especially during the 2000 to 2007 timeframe when cybersecurity was hardly ever on the roadmap for companies producing computing systems.”

“Another concerning point is the underlying arguments that imply the devices built from 2000 to 2007 are still in use. As with many critical infrastructure systems, costs can prohibit frequent hardware refresh cycles,” Thames wrote via email. “As such, many voting machines are likely to contain older operating systems and other software with many vulnerabilities due to these systems not being able to be updated with operating system patches and such. This is a challenging problem we face with all of our critical infrastructure, with very few good solutions at this time.”

ES&S did not respond to requests for comment, and it is unclear whether the affected election systems were ever fixed or whether they are still in use.

Fixing voting machine security

Voting machine security had already been shown to be in a troubling state after hackers at Defcon 2016 were able to crack all of the systems tested within just a few days.

Every system charged with securing our government’s processes … should be open to large security audits.
Jonathan Sander, CTO, Stealthbits Technologies

Sean Newman, director of product management at Corero Network Security, said the news about PCAnywhere will make “little difference” in the likelihood of finding other election system security issues.

“They run software and, if they have any kind of internet connectivity, even for managing the voting system/process itself, then there’s a reasonable chance that vulnerabilities exist, which could provide unauthorized users with the ability to have an impact on the normal operation of the system,” Newman wrote via email. “The focus should be for vendors, like ES&S, to ensure they use secure coding practices to develop the software for such systems and avoid any need to expose such systems to the public Internet.”

Jonathan Sander, CTO at Stealthbits Technologies, noted that government “pressures to do everything cheaply and with world class, state actor proof security are in tension” when it comes to election system security and outside audits are needed.

“Every system charged with securing our government’s processes — a.k.a. protecting our collective benefit — should be open to large security audits. To sell anything to the federal government you need to go through tons of certifications. But that’s not enough,” Sander wrote via email. “Bug bounties to get the hacker community to find vulnerabilities, open review at a source level for all solutions to be used in government, and mandatory standards for any remote access features should be table stakes for putting in systems like this.”

Thames noted that a major issue is that “although the U.S. electoral infrastructure is part of the nation’s critical infrastructure, it is still largely up to local and state agencies to ultimately enforce security of the systems.”

“Herein lies another challenging problem. Local and state agencies likely have little to no expertise or budget for securing their voting systems. Every time I go to the voting polls, I see mostly volunteers with a few dedicated staff. Most volunteers at the polls will not have experience with cyber and/or physical security issues related to voting machines,” Thames wrote. “Moreover, the nation already has a significant deficit for staffing our cyber security departments, in both government and industry. Funding will likely need to be increased, somehow, for local and state government agencies in order to provide adequate security for our voting systems.”

Container orchestration systems at risk by being web-accessible

Researchers found more than 21,000 container orchestration systems are at risk simply because they are accessible via the web.

Security researchers from Lacework, a cloud security vendor based in Mountain View, Calif., searched for popular container orchestration systems, like Kubernetes, Docker Swarm, Mesosphere and OpenShift, and they found tens of thousands of administrator dashboards were accessible on the internet. According to Lacework’s report, this exposure alone could leave organizations at risk because of the “potential for attack points caused by poorly configured resources, lack of credentials and the use of nonsecure protocols.”

“There are typically two critical pieces to managing these systems. First is a web UI and associated APIs. Secondly, an administrator dashboard and API are popular because they allow users to essentially run all aspects of a container cluster from a single interface,” Lacework’s researchers wrote in its report. “Access to the dashboard gives you top-level access to all aspects of administration for the cluster it is assigned to manage, [including] managing applications, containers, starting workloads, adding and modifying applications, and setting key security controls.”

Dan Hubbard, chief security architect at Lacework, said these cloud container orchestration systems represent a significant change from traditional security.

“In the old data center days, it was easy to set policy around who could access admin consoles, as you would simply limit it to your corporate network and trusted areas. The cloud, combined with our need to work from anywhere, changes this dramatically, and there are certainly use cases to allow remote administration over the internet,” Hubbard said via email. “That said, it should be done in a secure way. Extra security measures like multifactor authentication, enforced SSL, [role-based access controls], a proxy in front of the server to limit access or a ‘jump server’ are all ways to do this. This is something that security needs to be aware of.”

Lacework reported that more than 300 of the exposed container orchestration systems’ dashboards did not have credentials implemented to limit access, and “38 servers running healthz [web application health and security checker] live on the Internet with no authentication whatsoever were discovered.”
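
The underlying check is simple, which is what makes the exposure notable. Below is a rough sketch of the kind of unauthenticated probe involved; the hostname is a placeholder, and the paths are typical dashboard and healthz endpoints rather than addresses from the report.

```python
# Rough sketch of an unauthenticated exposure check. The host is a
# placeholder; the paths are typical dashboard/healthz endpoints.
import requests

BASE = "https://dashboard.example.internal"   # placeholder, not a scanned target

for path in ("/healthz", "/api/v1/namespaces"):
    try:
        # Self-signed certificates are common on these dashboards, so
        # verification is disabled for this illustrative probe.
        resp = requests.get(BASE + path, timeout=5, verify=False)
    except requests.RequestException as exc:
        print(path, "unreachable:", exc)
        continue
    # A 200 response with no credentials supplied is the red flag.
    print(path, resp.status_code, "(no credentials supplied)")
```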

Hubbard added that “these sites had security weaknesses that could have enabled hackers to either attack directly these nodes or provide hackers with information that would allow them to attack more easily the company owning these nodes.” 

However, despite warning of potential risks to these container orchestration systems, Hubbard and Lacework could not expand on specific threats facing any of the nearly 22,000 accessible dashboards described in the report.

“Technically, they are all connected to the internet and their ports are open, so attackers can gain privileged access or discover information about the target,” Hubbard said. “With respect to flaws, we did not perform any password cracking or dictionary attacks against the machines or vulnerability scans. However, we did notice that a lot of the machines had other services open besides the container orchestration, and that certainly increases the attack surface.”

Tableau acquisition of MIT AI startup aims at smarter BI software

Tableau Software has acquired AI startup Empirical Systems in a bid to give users of its self-service BI platform more insight into their data. The Tableau acquisition, announced today, adds an AI-driven engine that’s designed to automate the data modeling process without requiring the involvement of skilled statisticians.

Based in Cambridge, Mass., Empirical Systems started as a spinoff from the MIT Probabilistic Computing Project. The startup claims its analytics engine and data platform are able to automatically model data for analysis and then provide interactive and predictive insights into that data.

The technology is still in beta, and Francois Ajenstat, Tableau’s chief product officer, wouldn’t say how many customers are using it as part of the beta program. But he said the current use cases are broad and include companies in retail, manufacturing, healthcare and financial services. That wide applicability is part of the reason why the Tableau acquisition happened, he noted.

Catch-up effort with advanced technology

In some ways, however, the Tableau acquisition is a “catch-up play” on providing automated insight-generation capabilities, said Jen Underwood, founder of Impact Analytix LLC, a product research and consulting firm in Tampa. Some other BI and analytics vendors “already have some of this,” Underwood said, citing Datorama and Tibco as examples.


Empirical’s automated modeling and statistical analysis tools could put Tableau ahead of its rivals, she said, but it’s too soon to tell without having more details on the integration plans. Nonetheless, she said she thinks the technology will be a useful addition for Tableau users.

“People will like it,” she said. “It will make advanced analytics easier for the masses.”

Tableau already has been investing in AI and machine learning technologies internally. In April, the company released its Tableau Prep data preparation software, with embedded fuzzy clustering algorithms that employ AI to help users group data sets together. Before that, Tableau last year released a recommendation engine that shows users recommended data sources for analytics applications. The feature is similar to how Netflix suggests movies and TV shows based on what a user has previously watched, Ajenstat explained.

Integration plans still unclear

Ajenstat wouldn’t comment on when the Tableau acquisition will result in Empirical’s software becoming available in Tableau’s platform, or whether customers will have to pay extra for the technology.

Video: Empirical CEO Richard Tibbetts on the company’s automated data modeling technology.

“Whether it’s an add-on or how it’s integrated, it’s too soon to talk about that,” he said.

However, he added that the Empirical engine will likely be “a foundational element” in Tableau, at least partially running behind the scenes, with a goal that “a lot of different things in Tableau will get smarter.”

Unlike some predictive algorithms that require large stores of data to function properly, Empirical’s software works with “data of all sizes, both large and small,” Ajenstat said. When integration does eventually begin to happen, Ajenstat said Tableau hopes to be able to better help users identify trends and outliers in data sets and point them toward factors they could drill into more quickly.

Augmented analytics trending

Tableau’s move around augmented analytics is in line with what Gartner pointed to as a key emerging technology in its 2018 Magic Quadrant report on BI and analytics platforms.

Various vendors are embedding machine learning tools into their software to aid with data preparation and modeling and with insight generation, according to Gartner. The consulting and market research firm said the augmented approach “has the potential to help users find the most important insights more quickly, particularly as data complexity grows.”

Such capabilities have yet to become mainstream product requirements for BI software buyers, Gartner said in the February 2018 report. But they are “a proof point for customers that vendors are innovating at a rapid pace,” it added.

The eight-person team from Empirical Systems will continue to work on the software after the Tableau acquisition. Tableau, which didn’t disclose the purchase price, also plans to create a research and development center in Cambridge.

Senior executive editor Craig Stedman contributed to this story.

Industry vet Lewis reveals plan to save Violin flash storage

Mark Lewis said when he became CEO of Violin Systems a few months back, a friend asked, “Why are you joining Violin?”

The friend knew Violin’s history. Lewis was replacing ailing Ebrahim Abbasi as CEO, taking over a company that had been through several chapters already — including Chapter 11 bankruptcy. Violin emerged from bankruptcy in 2017 when private equity firm Quantum Partners put up $25 million to satisfy Violin’s outstanding debt and fund restructuring.

Lewis first joined Violin as a consultant to Abbasi, following the demise of Lewis’ venture capital-backed Formation Data Systems. Formation Data couldn’t get its software-defined storage product to market fast enough to satisfy investors. When Abbasi stepped down for medical reasons, Lewis took over as Violin’s CEO.

“We are effectively a startup,” Lewis said of Violin.

Violin went from a startup in 2009 to a public company in 2013, fueled by early success as a flash storage pioneer. But the rest of the storage field jumped into all-flash, and both startups and established vendors surpassed Violin in sales.

Lewis’ experience as a storage executive goes back to 1990, when he became general manager of Digital Equipment Corp.’s storage OEM business. He followed with executive positions at Compaq, Hewlett-Packard and EMC before founding Formation in 2013.

Lewis has recruited other industry veterans to Violin. Rick Ruskin is the senior vice president of worldwide field operations, after occupying a similar post at Kaminario. Gary Lyng, vice president of product management and marketing, previously worked for SanDisk, and his resume includes stops at EMC, NetApp, Hewlett Packard Enterprise and Veritas.

We spoke with Lewis about why he joined Violin and his strategy for plotting its comeback.

Given Violin’s ups and downs, how challenging has it been to maintain the confidence of existing customers and win new Violin flash storage accounts?

Mark Lewis, CEO of Violin Systems

Mark Lewis: I had a sales friend tell me ‘Congrats on [joining] Violin, but why are you joining Violin?’ Everybody has seen the press about the bankruptcy and thinks things here must be terrible. Clearly some bad execution happened to drive us there, but that all helped to battle-harden the technology. I’m excited by the potential we have.

One of the things that impressed me most when I got here was the incredible IP [intellectual property] Violin has. The IP was in great shape, but we didn’t have a lot of process. Cuts had been made to the sales team. We had no growth engine because we had no marketing and sales. Now that we have private equity funding, we have rebuilt a channel partner-oriented sales team. We’re rebuilding relationships with existing customers, and we already have 20 new logo opportunities [for new customers] this quarter.

We still have a lot of large installs. About 150 companies of the Fortune 500 currently use Violin to run their most mission-critical applications. Now that we’re back to being privately owned, we can leverage our IP assets and installed base to re-establish ourselves on a deeper footing. We’re not doing cloud tiering. We’re not doing object stores. We’re not even doing file storage. We’re sticking with block storage, Fibre Channel, iSCSI Ethernet and our approach of selling only through the channel. We think Violin flash becomes a complement that can coexist within your environment.

Why didn’t Formation Data Systems survive, and how will those lessons help you in leading Violin and succeeding with Violin flash storage?

Lewis: They are two products in completely different markets. At Formation, I advocated for a certain portion of the market to move to software-defined storage for midperformance or bulk storage. It’s a valid market, but what we learned at Formation is that there isn’t a lot that’s broken in the classic storage market. People in storage tend to be conservative. I believe software-defined storage ultimately will be successful, but it won’t happen quickly. It will be a multidecade transition.

The core weakness of the original Violin flash storage technology was that it was built as very fast storage hardware, with no software. That made it a struggle for enterprises to adopt it. It was a hard lesson to learn. Obviously the company struggled for a long time getting that right.

The good news is we’ve now got a platform. We have battle-tested software capabilities that we’re building on, and we will evolve to using more off-the-shelf commodity hardware, like NVMe [nonvolatile memory express], and continue to move to more of a software setup. But being in the enterprise, we aren’t trying to be an all software-defined storage. To address the super-high-performance environments, you have to integrate hardware and software to work tightly together. Loose coupling works fine, but we want to beat everybody on performance. We’re always going to have a close tie to hardware.

What’s your strategy to help Violin avoid the pitfalls that sidetracked it before?

One of the things that will make us successful is improving our focus on ultra-high-performance workloads. People categorize the market now [as] … all-flash arrays, but that’s really not a market.
Mark Lewis, CEO, Violin Systems

Lewis: One of the things that will make us successful is improving our focus on ultra-high-performance workloads. People categorize the market now in the general term of all-flash arrays, but that’s really not a market. It’s something that can be counted, but all-flash arrays don’t make up a buying choice in any particular way. We believe there’s a lot more depth to it than that.

The total addressable market for all-flash arrays last year was about $8 billion. Violin isn’t even a drop in that bucket. We’re effectively a startup. Violin is known for incredible IP around performance. We’re focusing just on the segment of the market that needs extreme storage performance. We’re not going after the generic market to compete with Pure Storage, NetApp, Dell EMC or others. We’d be very happy to carve up a billion dollars of the high-performance portion of the market.

Are investors asking about when you’ll be in the black? Do you have a projected timeline for turning a profit with the Violin flash storage platform?

Lewis: We’re taking a conservative, cash-oriented approach. We aren’t going to focus on profitability now, but on generating positive cash flow. That has to come first as a way to sustain the business. I believe we’re about two years away from being able to do that. But we won’t have to be a billion-dollar company to have positive cash flow. Thankfully, we have IP that keeps our costs and margins in good shape, and we’re able to exploit flash for performance pretty cost effectively.

Other vendors are trying to play a game of replace an old disk array with flash. It’s a big market, but the negative is a race to the bottom on cost. We’re attacking the market from the top down and have a lot of runway to [achieve] profitable growth, without worrying about cannibalization.

What is Violin’s roadmap for supporting NVMe flash?

Lewis: We will release NVMe over Fabrics [products] this year. The front-end NVMe [support] is important to our customers and will give us improved performance, even above what we can deliver now.

On the device side, it’s a different story. We never used SAS or SATA drives. We created our own controller chip that’s actually faster than NVMe today. Because of our hardware-software integration, we’re working with the NVMe device suppliers to add the software hooks that we need to make NVMe fast enough to go in our Violin systems. We want to embrace that commodity hardware, but we need software improvements to do it. NVMe, as it is now, is actually too slow for us. We have to help NVMe device makers to get caught up with us.

For storage class memory, we’re investigating 3D XPoint and other RAM technologies. Our strategy is to give our customers the option to use those technologies as soon as they become commercially viable.