
GPU-buffed servers advance Cisco’s AI agenda

Cisco Systems is the latest hardware vendor to offer gear tuned for AI and machine learning-based workloads.

Competition to support AI and machine learning workloads continues to heat up. Earlier this year, archrivals Dell Technologies Inc., Hewlett Packard Enterprise and IBM rolled out servers designed to optimize the performance of AI and machine learning workloads. Many smaller vendors are chasing this market as well.

“This is going to be a highly competitive field going forward with everyone having their own solution,” said Jean Bozman, vice president and principal analyst at Hurwitz & Associates. “IT organizations will have to figure out, with the help of third-party organizations, how to best take advantage of these new technologies.”

Cisco AI plan taps Nvidia GPUs

The Cisco UCS C480 ML M5 rack server, the company’s first tuned for AI workloads, contains Nvidia Tesla V100 Tensor Core GPUs linked by NVLink to boost performance, and it works with neural networks and large data sets to train computers to carry out complex tasks, according to the company. The server also integrates with Cisco Intersight, introduced last year, which allows IT professionals to automate policies and operations across their infrastructure from the cloud.

This Cisco AI server will ship sometime during this year’s fourth quarter. Cisco Services will offer technical support for a range of AI and machine learning capabilities.

Cisco intends to target several industries with the new system. Financial services companies can use it for fraud detection and algorithmic trading, while healthcare companies can enlist it to deliver insights and diagnostics, improve medical image classification and speed drug discovery and research.

Server hardware makers place bets on AI

The market for AI and machine learning, particularly the former, represents a rich opportunity for systems vendors over the next year or two. Only 4% of CIOs said they have implemented AI projects, according to a Gartner study earlier this year. However, some 46% have blueprints in place to implement such projects, and many of them have kicked off pilot programs.

[AI and machine learning-based servers are] going to be a highly competitive field going forward with everyone having their own solution.
Jean Bozman, vice president and principal analyst, Hurwitz & Associates

AI and machine learning offer IT shops more efficient ways to address complex issues, but they will significantly affect underlying infrastructure and processes. Larger IT shops must invest heavily in training and educating existing employees in how to use the technologies, the Gartner report stated. They also must upgrade existing infrastructure before they deploy production-ready AI and machine learning workloads. Enterprises will need to retool infrastructure to handle data more efficiently.

“All vendors will have the same story about data being your most valuable asset and how they can handle it efficiently,” Bozman said. “But to get at [the data] you first have to break down the data silos, label the data to get at it efficiently, and add data protection.”

Only after this prep work can IT shops take full advantage of AI-powered hardware-software tools.

“No matter how easy some of these vendors say it is to implement their integrated solutions, IT [shops] have more than a little homework to do to make it all work,” one industry analyst said. “Then you are ready to get the best results from any AI-based data analytics.”

Mist automates WLAN monitoring with new AI features

Mist Systems announced this week that its Marvis virtual network assistant now understands how to respond to hundreds of inquiries related to wireless LAN performance. And, in some cases, it can detect anomalies in those networks before they cause problems for end users.

IT administrators can ask Marvis questions about the performance of wireless networks — and the devices connected to them — using natural language commands, such as, “What’s wrong with John’s laptop?” The vendor said the technology helps customers identify client-level problems, rather than just network-wide trends.

Marvis could only handle roughly a dozen basic questions at launch in February. But Mist’s machine learning platform has used data from customers that have started using the product to improve Marvis’ natural language processing (NLP) skills for WLAN monitoring. Marvis can now field hundreds of queries, with less specificity required in asking each question.

Mist also announced an anomaly detection feature for Marvis that uses deep learning to determine when a wireless network is starting to behave abnormally, potentially flagging issues before they happen. Using the product’s APIs, IT departments can integrate Marvis with their help desk software to set up automatic alerts.
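As a hypothetical sketch of that kind of API-to-help-desk integration, an anomaly event can be mapped to a ticket payload before being sent to the help desk. The event fields, severity rule and ticket shape below are invented for illustration and are not Mist’s actual API:

```python
import json

def anomaly_to_ticket(event):
    """Map an anomaly event (hypothetical fields) to a generic ticket payload."""
    return {
        "title": f"WLAN anomaly: {event['type']} at {event['site']}",
        "severity": "high" if event.get("score", 0) >= 0.8 else "medium",
        "description": event.get("detail", ""),
    }

# Invented example event; a real integration would POST the JSON below
# to the help desk's webhook endpoint.
event = {"type": "roaming failure spike", "site": "HQ-floor-3",
         "score": 0.91, "detail": "Failure rate well above baseline"}
ticket = anomaly_to_ticket(event)
print(json.dumps(ticket))
```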

Mist has a robust platform for network management, and the advancements announced this week represent “solid steps forward for the company and the industry,” said Brandon Butler, analyst at IDC.

Cisco and Aruba Networks, a subsidiary of Hewlett Packard Enterprise, have also been investing in new technologies for automated WLAN monitoring and management, Butler said.

“Mist has taken a unique approach in the market with its focus on NLP capabilities to provide users an intuitive way of interfacing with the management platform,” Butler said. “It is one of many companies … that are building up their anomaly detection and auto-remediation capabilities using machine learning capabilities.”

Applying AI to radio resource management

The original promise of radio resource management (RRM), which has been around for 15 years, was that the service would detect noise and interference in wireless networks and adjust access points and channels accordingly, said Jeff Aaron, vice president of marketing at Mist, based in Cupertino, Calif.

“The problem is it’s never really worked that way,” Aaron said. “RRM has never been real-time; it’s usually done at night, because it doesn’t really have the level of data you need to make the decision.”

Now, Mist has revamped its RRM service using AI, so it can monitor the coverage, capacity, throughput and performance of Wi-Fi networks on a per-user basis. The service makes automatic changes and quantifies what impact — positive or negative — those changes have on end users.
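The before-and-after accounting described above can be illustrated with a small sketch. The user names and throughput numbers are invented, and a real system would compare many more metrics than raw throughput:

```python
def change_impact(before, after):
    """Per-user throughput delta; positive means the change helped that user."""
    return {user: after[user] - before[user]
            for user in before if user in after}

# Illustrative per-user throughput (Mbps) before and after an RRM channel change.
before = {"alice": 40.0, "bob": 55.0}
after  = {"alice": 52.0, "bob": 51.0}

impact = change_impact(before, after)
helped = sum(1 for delta in impact.values() if delta > 0)
print(f"{helped}/{len(impact)} users improved")
```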

Mist has RRM in its flagship product for WLAN monitoring and management, Wi-Fi Assurance.

Service-level expectations for WAN performance

Mist will now let customers establish and enforce service-level expectations (SLEs) for WAN performance. These expectations will help Mist customers track the impact of latency, jitter and packet loss on end users.

The release of SLEs for the WAN comes as Mist pursues partnerships with Juniper and VMware to reduce friction between the performance and user experience of the WLAN and the WAN.

Mist also lets customers set service levels for Wi-Fi performance based on metrics that include capacity, coverage, throughput, latency, access point uptime and roaming.
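A minimal sketch of how such service-level checks might work, with invented metric names and thresholds standing in for Mist’s actual SLE definitions:

```python
# Each expectation is (kind, limit): "min" metrics must stay at or above the
# limit, "max" metrics at or below it. Values here are illustrative only.
SLE = {
    "throughput_mbps": ("min", 20.0),
    "latency_ms": ("max", 50.0),
    "ap_uptime_pct": ("min", 99.9),
}

def sle_violations(measured, sle=SLE):
    """Return the metrics that miss their service-level expectation."""
    bad = []
    for metric, (kind, limit) in sle.items():
        value = measured.get(metric)
        if value is None:
            continue  # metric not reported this interval
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            bad.append(metric)
    return bad

print(sle_violations({"throughput_mbps": 12.0, "latency_ms": 30.0,
                      "ap_uptime_pct": 99.95}))
```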

Dell EMC PowerVault ME4 launched for entry-level SAN

Dell EMC this week added a new line of entry-level storage systems, extending its PowerVault line to handle SAN and direct-attached storage.

The Dell EMC PowerVault ME4 line consists of three flash-based models: the 2U ME4012 and ME4024 systems and the dense 5U ME4084.

The PowerVault block arrays can serve as direct-attached storage with Dell EMC PowerEdge servers, or they can extend SAN storage to enterprise remote branch offices. The latest PowerVault scales to 336 SAS drives and 4 PB of raw storage with ME expansion shelves.

The new PowerVault block systems can also provide unified file storage when paired with Dell EMC PowerVault NX Series Windows-based NAS devices.

PowerVault ME4 models start at $13,000, and Dell EMC’s auto-tiering, disaster recovery, RAID support, replication, snapshots, thin provisioning and volume copy software are standard features. Dell EMC claims an HTML5 graphical user interface enables setup within 15 minutes.

PowerVault for large and small customers

Dell’s $60 billion-plus acquisition of EMC in 2016 created wide industry speculation that the combined Dell EMC would need to winnow its overlapping midrange storage portfolio.

Last week, Dell’s vice chairman of products and operations, Jeff Clarke, said the midrange Unity and SC Series platforms would converge in 2019. But the vendor will still have a variety of storage array platforms. Dell EMC PowerMax — formerly VMAX — is the vendor’s flagship all-flash SAN. Dell EMC also sells XtremIO all-flash and Isilon clustered NAS systems.

EMC was the external storage market share leader before the Dell acquisition. Post-merger Dell generated more than double the revenue of any other external storage vendor in the second quarter of 2018, according to IDC’s Worldwide Quarterly Enterprise Storage Systems Tracker numbers released last week.

IDC credited Dell with $1.9 billion in storage revenue in the quarter — more than double the $830 million for No. 2 NetApp. Dell had 29.2% of the market and grew 18.4% year over year for the quarter, compared with the overall industry growth of 14.4%, according to IDC.

Dell EMC PowerVault ME4084 5U expansion
Dell EMC’s extended PowerVault family includes the ME4084 5U expansion enclosure.

Dell initially launched PowerVault for archiving and backup, but repositioned it as “cheap and deep” block storage behind the SC Series SANs.

Sean Kinney, a senior director of product marketing for Dell EMC midrange storage, said PowerVault ME doubles back-end performance with 12 Gbps SAS and is capable of handling 320,000 IOPS.

“We’ve talked over the past few months about how we’re going to simplify our [midrange] portfolio and align it under a couple of key platforms. We have the PowerMax at the high end. This is the next phase in that journey,” Kinney said.

The new PowerVault arrays take self-encrypting nearline SAS disks or 3.5-inch SAS-connected SSDs, and the two can be combined behind a single ME4 RAID controller. The configuration gives customers the option to configure PowerVault as all-flash or hybrid storage. The base ME4012 and ME4024 2U units come with dual controllers, 8 GB of memory per controller, and four ports for 10 GbE iSCSI, 12 Gbps SAS and 16 Gbps Fibre Channel connectivity.

Customers can add a 5U ME484 expansion enclosure behind any ME4 base unit to scale Dell EMC PowerVault to 336 nearline disks or SSDs. Dell EMC claimed it has sold more than 400,000 PowerVault units across the product’s generations.

Enterprises use PowerVault arrays “by the hundreds” at remote branch sites, while smaller organizations make up a big share of the installed base, said Bob Fine, a director of marketing for Dell EMC midrange storage.

“If you only have one or two IT generalists, PowerVault could be your entire data center,” Fine said.

How bias in AI happens — and what IT pros can do about it

Artificial intelligence systems are getting better and smarter, but are they ready to make impartial predictions, recommendations or decisions for us? Not quite, Gartner research vice president Darin Stewart said at the 2018 Gartner Catalyst event in San Diego.

Just like in our society, bias in AI is ubiquitous, Stewart said. These AI biases tend to arise from the priorities that the developer and the designer set when developing the algorithm and training the model.

Direct bias in AI arises when the model makes predictions, recommendations and decisions based on sensitive or prohibited attributes — aspects like race, gender, sexual orientation and religion. Fortunately, with the right tools and processes in place, direct bias can be “pretty easy to detect and prevent,” Stewart said.

According to Stewart, preventing bias requires situational testing on the inputs, turning off each of the sensitive attributes as you’re training the model and then measuring the impact on the output. The problem is that one of machine learning’s fundamental characteristics is to compensate for missing data. Therefore, nonsensitive attributes that are strongly correlated with the sensitive attributes are going to be weighted more strongly to compensate. This introduces — or at least reinforces — indirect bias in AI systems.
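Stewart’s situational-testing idea can be sketched in a few lines. This toy example assumes a simple linear scorer and invented attribute weights; a real test would retrain or re-query an actual model:

```python
def score(features, weights):
    """Toy linear scoring model; a stand-in for a trained classifier."""
    return sum(weights[k] * v for k, v in features.items())

def situational_test(features, weights, sensitive):
    """Output shift caused by zeroing ("turning off") each sensitive attribute."""
    base = score(features, weights)
    impact = {}
    for attr in sensitive:
        masked = dict(features)
        masked[attr] = 0.0          # turn the sensitive attribute off
        impact[attr] = base - score(masked, weights)
    return impact

# Hypothetical weights and input; any large shift for "gender" signals
# direct bias in the model.
weights = {"income": 0.5, "zip_risk": 0.3, "gender": 0.4}
applicant = {"income": 1.0, "zip_risk": 1.0, "gender": 1.0}
shift = situational_test(applicant, weights, sensitive=["gender"])
print(shift)
```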

AI bias in criminal sentencing

A distressing real-life example of this indirect bias reinforcement is in criminal justice, as an AI sentencing solution called Compas is currently being used in several U.S. states, Stewart said. The system takes a profile of a defendant and generates a risk score based on how likely a defendant is to reoffend and be considered a risk to the community. Judges then take these risk scores into account when sentencing.

A study looked at several thousand verdicts associated with the AI system and found that African-Americans were 77% more likely than white defendants to be incorrectly classified as high risk. Conversely, white defendants were 40% more likely to be misclassified as low risk, only to go on to reoffend.
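The comparison behind findings like these is a per-group false-positive rate. A small sketch with made-up labels (1 = predicted high risk, 0 = did not reoffend); the data is invented for illustration:

```python
def false_positive_rate(actual, predicted):
    """Share of people who did not reoffend (0) but were labeled high risk (1)."""
    negatives = [p for a, p in zip(actual, predicted) if a == 0]
    return sum(negatives) / len(negatives)

# Two invented groups of defendants, none of whom actually reoffended.
fpr_a = false_positive_rate([0, 0, 0, 0], [1, 1, 0, 0])   # 2 of 4 mislabeled
fpr_b = false_positive_rate([0, 0, 0, 0], [1, 0, 0, 0])   # 1 of 4 mislabeled
print(f"Group A's false-positive rate is {fpr_a / fpr_b:.1f}x group B's")
```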

Even though it is not part of the underlying data set, Compas’ predictions are highly correlated with race because more weight is given to related nonsensitive attributes like geography and education level.

If you omit all of the sensitive attributes, yes, you’re eliminating direct bias, but you’re reintroducing and reinforcing indirect bias.
Darin Stewart, research vice president, Gartner

“You’re kind of in a Catch-22,” Stewart said. “If you omit all of the sensitive attributes, yes, you’re eliminating direct bias, but you’re reintroducing and reinforcing indirect bias. And if you have separate classifiers for each of the sensitive attributes, then you’re reintroducing direct bias.”

One of the best ways IT pros can combat this, Stewart said, is to determine at the outset what the threshold of acceptable differentiation should be and then measure each value against it. If it exceeds your threshold, it’s excluded from the model. If it’s under the limit, it’s included in the model.
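A minimal sketch of that threshold-based screening, with invented disparity scores and an assumed 0.2 threshold:

```python
def select_features(disparity, threshold):
    """Keep features whose measured group disparity is within the threshold."""
    kept = {f: d for f, d in disparity.items() if d <= threshold}
    dropped = sorted(set(disparity) - set(kept))
    return kept, dropped

# Hypothetical per-feature disparity measurements (0 = no group difference).
disparity = {"income": 0.05, "education": 0.25, "tenure": 0.10}
kept, dropped = select_features(disparity, threshold=0.2)
print(dropped)   # features excluded from the model for exceeding the threshold
```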

“You should use those thresholds, those measures of fairness, as constraints on the training process itself,” Stewart said.

If you are creating an AI system that is going to “materially impact someone’s life,” you also need to have a human in the loop who understands why decisions are being made, he added.

Context is key

Stewart also warned IT practitioners to be wary when training an AI system on historical records. AI systems are optimized to match previous decisions — and previous biases. He points to the racist practice of “redlining” in Portland, Ore. — legal in the city from 1856 until 1990 — which prevented people of color from purchasing homes in certain neighborhoods for decades. AI systems used in real estate could potentially reinstate this practice, Stewart said.

“Even though the laws change and those bias practices are no longer allowed, there’s 144 years of precedent data and a lot of financial activity-based management solutions are trained on those historical records,” Stewart said.

To avoid perpetuating that type of bias in AI, Stewart said it’s critical that IT pros pay close attention to the context surrounding their training data.

“This goes beyond basic data hygiene,” Stewart said. “You’re not just looking for corrupted and duplicate values, you’re looking for patterns. You’re looking for context.”

If IT pros are using unstructured data, text analytics is their best friend, Stewart said. It can help them uncover patterns they wouldn’t find otherwise. Ideally, IT pros will also have a master list of “don’t-go-there” items they check against when searching for bias.
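The “don’t-go-there” check can be as simple as scanning text for watchlisted terms. A sketch with an invented watchlist; real text analytics would go well beyond keyword matching:

```python
import re

def flag_documents(documents, watchlist):
    """Return a mapping of document index -> suspect terms found in it."""
    flags = {}
    for i, text in enumerate(documents):
        hits = [term for term in watchlist
                if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE)]
        if hits:
            flags[i] = hits   # flag the document for human review
    return flags

# Invented documents and watchlist for illustration.
docs = ["Applicant lives in redlined district 7", "Standard application"]
print(flag_documents(docs, watchlist=["redlined"]))
```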

“Develop a list of suspect results so that if something unusual started popping out of the model, it would be a red flag that needs further investigation,” Stewart said.

Intentionally inserting bias in AI

Is there ever a case where IT pros would want to inject bias into an AI system? With all the talk about the dangers of perpetuating AI bias, it may seem odd to even consider the possibility. But if one is injecting that bias to correct a past inequity, Stewart’s advice was to go for it.

“That is perfectly acceptable if it is a legitimate and ethical target,” he said. “There are legitimate cases where a big disparity between two groups is the correct outcome, but if you see something that isn’t right or that isn’t reflected in the natural process, you can inject bias into the algorithm and optimize it to maximize [a certain] outcome.”

Inserting bias in AI systems could, for instance, be used to correct gender disparities in certain industries, he said. The only proviso he would put on the practice of purposefully inserting bias into an AI algorithm is to document it and be transparent about what you’re doing.

“That way, people know what’s going on inside the algorithm and if suddenly things shift to the other extreme, you know how to dial it back,” Stewart said.

Attala Systems tackles RoCE shortcomings for NVMe flash

Attala Systems is taking steps to ease the deployment of its composable storage infrastructure.

The San Jose, Calif., startup claims it has hardened Remote Direct Memory Access over Converged Ethernet (RoCE) networking to tolerate dropped packets on lossy leaf-spine networks. Attala said the technique enables RoCE-based nonvolatile memory express (NVMe) flash storage to span clusters of bare-metal racks.

Attala customers also are testing the company’s multi-tenant hot data lake software that lets disparate workload clusters access shared directories and immutable files, Attala said. That software is scheduled for general availability later in 2018.

Attala and Super Micro Computer have teamed to launch the 1U Intelligent Storage Node on the Enterprise & Data Center SSD Form Factor, an emerging standard for Intel “ruler” SSDs. The flash systems are available directly from Attala now, with a Super Micro SKU to follow.

Attala’s high-performance composable infrastructure is serverless hardware. Compute, networking and storage reside on Attala custom field-programmable gate arrays (FPGAs), designed on a chipset by Intel-owned Altera. Hyperscale providers and companies with large private clouds are the primary target for the novel architecture, in which NVMe SSDs can be mapped to an individual application within a cluster. Attala started hardware shipments in August.

Attala Systems NVMe JBOF
Attala’s NVMe JBOF (just a bunch of flash)

Data loss and retransmission

RoCE-based networking typically gets configured as one switch within a rack. This is largely due to the technological challenge of configuring a multiswitch lossy environment so it behaves as a lossless network. Ordinarily, leaf-spine networks lack a method of flow control to recover data packets lost in transit.

RDMA originated as the transport layer for InfiniBand. Later versions of RDMA technology were adapted for Ethernet networking.

CEO Taufik Ma said Attala Systems added error recovery to RoCE that enables data centers to use standard layer 2 NICs on a lossy network. Attala engineers contributed code to harden the open source Soft-RoCE driver in an effort to “unshackle” NVMe over RoCE from a single server.

“We went in and patched some of the outstanding issues in Soft-RoCE to ease deployment for customers. All they need to do is plug in our FPGA-based storage node and download some upstream Soft-RoCE driver software on the host,” Ma said.

Attala’s patch is part of the upstream kernel and is expected to work its way into Linux distributions, Ma said.

“There is always going to be a very low rate of packet loss when you’re going across multiple racks. What Attala has done is plug in to Soft-RoCE and found a way to detect packet loss and retransmit it,” said Howard Marks, the founder and chief scientist at analyst firm DeepStorage.
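The recovery Marks describes boils down to spotting gaps in sequence numbers and requesting retransmission of what fell in them. A conceptual sketch of the detection step (illustrative Python only; the actual Soft-RoCE driver is C code in the Linux kernel):

```python
def detect_gaps(received):
    """Given the ascending sequence numbers that arrived, list the missing ones."""
    expected = received[0]
    missing = []
    for seq in received:
        while expected < seq:        # a gap: these packets were dropped in transit
            missing.append(expected)
            expected += 1
        expected = seq + 1
    return missing

# Packets 3 and 4 were lost on the lossy network and must be retransmitted.
print(detect_gaps([1, 2, 5, 6]))   # [3, 4]
```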

Composable infrastructure is a method to pool racks of IT hardware components. Compute, networking and storage are treated as separate resources that applications consume as needed. The objective is to avoid overprovisioning and unused capacity.

Composability has similarities but also key differences to converged infrastructure and hyper-convergence. Major storage vendors that have released composable infrastructure include Dell EMC, Hewlett Packard Enterprise and Western Digital Corp.

The challenge for Attala Systems is twofold: convincing the market that FPGAs represent the next wave in disaggregated architecture and gaining a foothold within major cloud data centers.

Other challengers are coming out with products that integrate NVMe over Fabrics as a layer of silicon. Kazan Networks has an ASIC-based bridge for insertion on an I/O board, while Solarflare Communications and stealth vendor Lightbits Labs are pushing NVMe over TCP as a transport mechanism. Network specialist Mellanox Technologies is getting in on the action as well, developing a smart network interface controller with ARM cores to handle RAID and deduplication for just a bunch of flash (JBOF) arrays.

Marks said the market hasn’t matured for composable storage products to determine which design will prevail.

“NVMe over Fabrics isn’t deployed yet at large scale. When it starts to threaten SCSI, then people are going to switch whole hog to NVMe. By that time, enterprises will go with NVMe over lossless Fibre Channel, and hyperscalers will go with NVMe over TCP,” Marks said.

Multi-tenant data lake use cases include AI

Ma said the Attala Data Lake software can be used as a stand-alone repository or as an add-on module to extend an existing data lake. Intended use cases include write-once, read-many files shared by different application clusters, such as AI training data and network and transaction logs.

The hot data is stored in Attala Systems’ scale-out storage nodes. Requested data is integrated directly in an application’s native file system.

The Attala-Super Micro EDSFF JBOF uses a cut-through chassis with four 50 Gigabit Ethernet ports. The box accepts 32 Intel ruler SSDs. Users can slice up the capacity in increments and assign storage to different namespaces. Some of the capacity needs to be reserved for third-party RAID and data services. Attala said a sample four-node configuration provides up to 4 petabytes of raw storage, 16,384 volumes and up to 22 million IOPS.

Igneous Systems rocks unstructured data protection for NAS

Igneous Systems Inc. is getting attached to NAS.

The unstructured data protection specialist this week expanded integrations with providers of network-attached storage. Igneous Systems now provides multiprotocol support for Dell EMC Isilon OneFS, direct integration with Qumulo File Fabric (QF2) and Pure Storage FlashBlade object storage support.

Igneous backs up data to its fully managed appliance that runs in the customer’s data center. Customers can also tier data up to the Amazon, Microsoft and Google public clouds.

Protecting unstructured data at scale is important at a time when data sets are no longer tidy and living in one place, said Allison Armstrong, vice president of marketing at Igneous Systems.

Igneous seeks simplified backup

Igneous claims it provides the only multiprotocol support for Isilon OneFS. Organizations that use SMB and NFS at the same time can protect all that data and retain both permission sets through one backup. The support provides an alternative to replication for data protection.

The Igneous API-based integration with Qumulo also offers an alternative to replication. Igneous Systems enables backup and archive for QF2 clusters.

“Customers now have the ability to run a very simple backup routine,” Armstrong said.

Igneous Systems' Dell EMC Isilon protection
Igneous Systems offers multiprotocol data protection support for Dell EMC Isilon OneFS.

In addition, the support means Qumulo customers do not have to implement Network Data Management Protocol, a dated way of protecting NAS filers.

Igneous previously provided API-based integration to back up NFS data on Pure and is expanding that support to include FlashBlade object storage. The protection includes data movement at scale and features automated provisioning and snapshot integration.

We’re optimized for unstructured data.
Allison Armstrong, vice president of marketing, Igneous Systems

“This is a way to ensure that modern data workflows are protected,” Armstrong said.

Igneous Systems customers do not have to download or install any additional software to receive the updates. Igneous data protection products are available through its channel partners.

Igneous delivers its protection as a service, unified and at massive scale, said Christophe Bertrand, senior analyst at Enterprise Strategy Group. Bertrand said he thinks there’s interest in the protection Igneous provides, and he’s curious to see how the business will evolve.

“They’re doing it for a variety of platforms,” Bertrand said. “I think they’re solving a problem for a number of organizations. But … the market will decide.”

Focusing on unstructured data

Launched in 2016, Igneous has a customer base in the double digits, Armstrong said. Two main targets are large enterprises and data-centric organizations such as life sciences businesses. Earlier this year, CEO and Founder Kiran Bhageshpur said he expected to see a significant increase in the customer count by the end of 2018.

Igneous closed a $15 million Series B funding round in January, geared toward helping expand its workforce and marketing.

Armstrong said Igneous Systems runs into secondary storage startups Cohesity and Rubrik in the marketplace, but they don’t have as much scale and are more focused on structured data.

“We’re optimized for unstructured data,” Armstrong said.

ICS security fails the Black Hat test

The news at Black Hat 2018 wasn’t great when it came to industrial control systems. But while numerous sessions added up to sweeping condemnation of ICS security, there was at least the occasional saving grace that some vendors will correct some problems — at least some of the time. Still, the apparent lack of a security-conscious culture within these organizations means they’ll only fix the minimum, leaving similar products with the same underlying hardware, firmware and fatal bugs untouched and unsecured.

In a session called “Breaking the IIoT: Hacking Industrial Control Gateways,” Thomas Roth, security researcher and founder of Leveldown Security, an embedded and ICS security consulting and research company based in Esslingen, Germany, walked through the security faults of five gateway devices he had found at affordable prices on eBay. He wanted to look at commonly deployed, relatively current devices — things you find in the real world.

“If you go out on the network and start scanning, you’ll find thousands of these devices. In fact, you’ll find entire network ranges that are used almost exclusively for these devices,” he said.

“Often, they use static IP addresses with no VPN protection.” One device he looked at had a proprietary protocol for its wireless communications. But if you could break it — and he did — you had access to every one of those devices in the field, because the network addressing architecture was flat and unsegmented.

The first device he examined was typical of his experiments: a Moxa W2150A, which connects ICS devices to wireless networks via an Ethernet port on the device side and a wireless interface on the other. Between the two interfaces is an easily opened case that reveals a circuit board with pads for connecting to a debugging port. In a common theme across many of the devices discussed at the conference, Roth discovered the port was a serial terminal connection that booted directly to a root shell in Linux.

“This is a design decision, not a bug,” Roth said. But he noted that if you have the device and you can access a root shell, then as you are writing exploits, you can debug them directly on the device, “which is a pretty nice situation to be in.”

Roth noted the firmware for the device was available on the internet from the Moxa website, but it was encrypted. At first, this seemed like a dead end. But in looking at earlier firmware versions, he noticed one of the upgrades included adding the feature of encrypting the firmware.

This led him to an unencrypted update version, which included a package called “upgrade_firmware.” This, in turn, led to a function called “firmware_decrypt” — a function name that gave the audience a chuckle — which gave him plaintext access to the current version of the software. The decryption key was, needless to say, included in the upgrade code.

Roth raised an issue that hasn’t been much discussed in ICS security: supply chain security issues caused by the wide prevalence of openly accessible terminal access ports on devices. You can change the firmware, he said, write the changed version back to the device, return it to your distributor without mentioning the change, “and they will happily resell it to someone else.” In fact, he knows this because he conducted an experiment and was sold a device with firmware he had previously rewritten.

Roth discussed four more devices in some detail, with two of them still in the process of disclosure, “and there are a lot of fun issues.”

Beyond Roth’s pathway strewn with pwned gateways, there were other such sessions, including ones that found significant vulnerabilities in medical devices, cellular gateways, smart city infrastructure and satellite communications.

Jonathan Butts, CEO of security consultancy QED Secure Solutions, based in Coppell, Texas, said in a press conference at the event that dealing with vendors on ICS security disclosure had been particularly frustrating. In the case of a pacemaker made by Medtronic, a protracted process ended with the company deciding that changes to the product weren’t necessary. That led Butts and co-speaker Billy Rios, founder of WhiteScope LLC, a cybersecurity company based in Half Moon Bay, Calif., to demonstrate their attack live and let the audience judge for themselves.

“To be honest,” Butts said, “after about the one-and-a-half-year mark, and you see stuff like [Medtronic’s response], you get fed up.”

ICS security: Protection? Not

While it’s theoretically possible to protect at least the devices that aren’t implanted in human bodies by placing the ICS equivalents of a firewall at strategic network junction points, a session by Airbus security evaluators Julien Lenoir and Benoit Camredon showed a widely deployed ICS firewall made by Belden could be remotely exploited.

The Tofino Xenon device is typically situated between the IP-based control network and local ICS assets that use Modbus, EtherNet/IP or OPC protocols. Interestingly, the device itself doesn’t have an IP address; it is essentially invisible to ordinary interrogation on the network.

A custom protocol allows a Windows machine running a configurator to discover and then send configuration data to a Xenon device. The configurator knows the addresses of protected ICS devices and knows the Xenon is somewhere between the configurator and the devices. The Xenon knows to watch for packets that carry a specific payload and recognizes them as packets from a configurator.

The two researchers were able to reverse-engineer the protocol enough to understand the arrangement that was used for encryption keys. The configurator discovers devices using a common key and then generates two additional keys that are unique to the particular pairing of that configurator and that specific firewall. All of these keys could be extracted from the discovery session, and then the keys unique to the device were used to establish a connection with the device.

“We were able to get a root shell,” Lenoir told the audience, echoing the familiar theme that almost all ICS devices run outdated Linux kernels. “Once everything was running as root, now the appliance was no longer a black box, but was instead a Linux kernel.”

From here, they settled on an attack model that used the devices’ ability to be updated from files on a USB stick. Camredon explained the updates comprised two files, both encrypted. “One is an update script, and one is a data file that is an image, including an image of the kernel.”

It turned out that all configurators and all Tofino Xenon devices used the same key for decrypting the update files. Because they had access to root on the Xenon, they were able to extract this key, at which point they further discovered there were no checks in the update script to ensure the data file hadn’t been tampered with since it was created.

Thus, a breached Xenon could be modified in whatever way the attackers wanted, an image of that system made, and the image could be encrypted and included in an update package without the separate installation script detecting the change.
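With one shared key and no integrity check, the tamper path reduces to decrypt, patch, re-encrypt. The model below is a deliberately simplified sketch (hypothetical names, with an XOR-over-hash keystream standing in for whatever cipher the real update format uses); it shows why the install script has no way to notice the change.

```python
import hashlib

# Hypothetical: the same update key is baked into every device, so
# extracting it once (via the root shell) breaks the whole fleet.
SHARED_UPDATE_KEY = b"same-key-on-every-device"

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Symmetric stand-in cipher: encrypting and decrypting are the same op.
    ks, ctr = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def tamper_with_update(encrypted_image: bytes, patch) -> bytes:
    # Attacker workflow: decrypt with the extracted shared key, modify
    # the image, re-encrypt. With no MAC or signature over the data
    # file, the separate install script accepts the result unchanged.
    image = xor_crypt(encrypted_image, SHARED_UPDATE_KEY)
    return xor_crypt(patch(image), SHARED_UPDATE_KEY)
```

A signed update format, or even a keyed MAC checked by the install script, would have closed this path.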

The Xenon has been updated to correct these problems since the researchers disclosed their findings. So, in theory, the firewall is back in business. One problem Roth noted, though, is these systems often come in dozens of variants, with different names and model numbers.

“If you report a bug to some of these vendors,” Roth said, “the vulnerability gets fixed, but then there are 10 different devices which run the same firmware, and they are left completely unpatched.”

Roth suggested this was a clear indication of the lack of security culture at many ICS vendors.

“It’s like exploiting in the ’90s,” he concluded. “We have no integrity protections on any of these devices.”

At another moment, he made a sweeping generalization: “Everything runs as root; everything runs on outdated Linux kernels; everything runs on outdated web servers. If any of these components fails, you have root permission.”

Irregularities discovered in WinVote voting machines

LAS VEGAS — The insecurity of electronic voting systems has been well-documented, but so far there has been no concrete evidence that those systems have been hacked in the field. However, a forensic analysis by security researcher Carsten Schuermann discovered irregularities in eight WinVote voting machines used in Virginia elections for more than a decade.

Speaking at Black Hat 2018, Schuermann, associate professor at IT University of Copenhagen, presented data that showed voting machine irregularities in WinVote systems used in a variety of state and federal elections from 2004 to 2014. In his session, titled “Lessons from Virginia – A Comparative Forensic Analysis of WinVote Voting Machines,” Schuermann also pushed for mandated paper ballots and regular audits to mitigate potential threats.

“When you add technology to the voting process, you clearly increase its attack surface,” Schuermann said.

Schuermann noted that there are actually two problems with insecure voting machines. The first is obvious — the systems can be easily hacked.

“That’s a real threat,” he said. “But the other threat is equally important and equally dangerous, and that is the threat of an alleged hack — when people claim there was a hack when there actually wasn’t.”

Such allegations can disrupt elections and damage the credibility of voting results. And since too many voting machines don’t produce paper trails, he said, those allegations can be as damaging as a real hack.


Schuermann had such a voting machine with him on stage — a decommissioned WinVote system that had a printer but only printed vote tallies and not individual ballots. He said he obtained eight WinVote voting machines from an unnamed source two years ago, and first hacked into one of the machines for a DEFCON Voting Village session last year.

Schuermann followed up with a deeper forensic analysis that uncovered concerning voting machine irregularities as well as serious vulnerabilities. He told the audience that while he had access to the machines’ SSDs, he did not have any access to memory or memory dumps, security logs or a record of wireless connections.

But what data was available showed a number of holes that hackers could exploit, including open ports (135, 139, 445 and 3389, among others) and unpatched versions of Windows XP Embedded from 2002 that were vulnerable to a critical buffer overflow attack, CVE-2003-0352.

“Another problem is that this machine has wireless turned on all the time,” Schuermann said, adding that the wireless password for the systems was “ABCDE.” “That’s not a very secure password.”

Those vulnerabilities in themselves didn’t prove the machines had been hacked, but a closer examination of files on some of the WinVote voting machines showed unexplained anomalies. One of the machines, for example, had MP3s of a Chinese pop song and traces of CD ripping software, and data showed the machine broadcast the song on the internet. That was strange, he said, but there were more concerning voting machine irregularities.

For example, three of the machines used during the 2005 Virginia gubernatorial election dialed out via their modems on Election Day, though the data didn’t explain why. Schuermann speculated that perhaps the systems were getting a security update, but one of the machines actually dialed the wrong number.

In addition, two of the systems used in the 2013 Virginia state elections had more than 60 files modified on Election Day before the polls closed, and USB devices were connected to one of the machines while the polls were open.

“That’s really bizarre,” he said.

It was unclear whether the files were modified as part of a system update, he said, and there wasn’t enough data to explain what those USB connections were for. Schuermann cautioned the audience that the voting machine irregularities weren’t necessarily evidence of hacking, but he said the uncertainty about the irregularities should serve as a call to action. Only a few states, he said, have electronic voting systems that produce paper ballots and can be audited.

“I have only one conclusion,” he said. “And that is, use paper and do your audits.”

SIEM benefits include efficient incident response, compliance

Security information and event management systems collect security log events from numerous hosts within an enterprise and store their relevant data centrally. By bringing this log data together, these SIEM products enable centralized analysis and reporting on an organization’s security events.

SIEM benefits include detecting attacks that other systems missed. Some SIEM tools also attempt to stop attacks — assuming the attacks are still in progress.

SIEM products have been available for many years, but the initial tools were targeted at large organizations with sophisticated security capabilities and ample security analyst staffing. It is only relatively recently that SIEM systems have emerged that are well suited to the needs of small and medium-sized organizations.

SIEM architectures available today include SIEM software installed on a local server, a local hardware or virtual appliance dedicated to SIEM, and a public cloud-based SIEM service.

Different organizations use SIEM systems for different purposes, so SIEM benefits vary across organizations. This article looks at the three top SIEM benefits, which are:

  • streamlining compliance reporting;
  • detecting incidents that would otherwise not be detected; and
  • improving the efficiency of incident handling.

1. Streamline compliance reporting

Many organizations deploy SIEM tools for this benefit alone: streamlining enterprise compliance reporting through a centralized logging solution. Each host that needs to have its logged security events included in reporting regularly transfers its log data to a SIEM server. A single SIEM server receives log data from many hosts and can generate one report that addresses all of the relevant logged security events among these hosts.

An organization without a SIEM system is unlikely to have robust centralized logging capabilities that can create rich customized reports, such as those necessary for most compliance reporting efforts. In such an environment, it may be necessary to generate individual reports for each host or to manually retrieve data from each host periodically and reassemble it at a centralized point to generate a single report.

The latter can be incredibly difficult, in no small part because different operating systems, applications and other pieces of software are likely to log their security events in various proprietary ways, making correlation a challenge. Converting all of this information into a single format may require extensive code development and customization.
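To make that normalization work concrete, consider just two hosts with different native formats, one syslog-style and one CSV. Even this toy converter (the log formats and field names here are invented for illustration) needs per-source parsing logic before a single unified report is possible; a real environment multiplies this across dozens of products.

```python
def parse_syslog(line: str) -> dict:
    # Hypothetical format: "2018-08-09T10:15:00Z host1 sshd: Failed password for root"
    ts, host, rest = line.split(" ", 2)
    proc, msg = rest.split(": ", 1)
    return {"time": ts, "host": host, "source": proc, "event": msg}

def parse_csv(line: str) -> dict:
    # Hypothetical format: "host2,2018-08-09 10:15:02,LOGIN_FAIL,admin"
    host, ts, event, user = line.split(",")
    return {"time": ts.replace(" ", "T") + "Z", "host": host,
            "source": "app", "event": f"{event} user={user}"}

def normalize(lines, parser):
    # Map every source's records into one common schema so a single
    # report (or correlation rule) can run over all of them.
    return [parser(line) for line in lines]
```

A SIEM product ships hundreds of such parsers out of the box, which is exactly the development effort an organization avoids by buying one.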

Another reason why SIEM tools are so useful is that they often have built-in support for most common compliance efforts. Their reporting capabilities meet the requirements mandated by regulations and standards such as the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS) and the Sarbanes-Oxley Act.

By using SIEM logs, an organization can save considerable time and resources when meeting its security compliance reporting requirements, especially if it is subject to more than one such compliance initiative.

2. Detect the undetected

SIEM systems are able to detect otherwise undetected incidents.

Many hosts that log security events do not have built-in incident detection capabilities. Although these hosts can observe events and generate audit log entries for them, they lack the ability to analyze the log entries to identify signs of malicious activity. At best, these hosts, such as end-user laptops and desktops, might be able to alert someone when a particular type of event occurs.

SIEM tools offer increased detection capabilities by correlating events across hosts. By gathering events from hosts across the enterprise, a SIEM system can see attacks that have different parts on different hosts and then reconstruct the series of events to determine what the nature of the attack was and whether or not it succeeded.

In other words, while a network intrusion prevention system might see part of an attack and a laptop’s operating system might see another part of the attack, a SIEM system can correlate the log data for all of these events. A SIEM tool can determine if, for example, a laptop was infected with malware which then caused it to join a botnet and start attacking other hosts.
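That cross-host correlation can be sketched as a simple rule over the merged event stream. The example below is a minimal illustration (the event type names are hypothetical, and real SIEM correlation engines are far richer): it flags a host only when an endpoint malware alert is later followed by network-level scanning from that same host.

```python
def correlate(events):
    # events: dicts sorted by time, each with "host" and "type" fields
    # drawn from the normalized, centralized log store.
    infected = set()
    flagged = []
    for e in events:
        if e["type"] == "malware_detected":
            infected.add(e["host"])
        elif e["type"] == "outbound_scan" and e["host"] in infected:
            # Endpoint alert followed by a network alert from the same
            # host: evidence the machine joined a botnet and is now
            # attacking others.
            flagged.append(e["host"])
    return flagged
```

Neither the endpoint agent nor the network sensor alone could reach this conclusion; only the system holding both logs can.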

It is important to understand that while SIEM tools have many benefits, they should not replace enterprise security controls for attack detection, such as intrusion prevention systems, firewalls and antivirus technologies. A SIEM tool cannot stand in for those controls because it has no ability to observe raw security events as they happen throughout the enterprise; it works from the log data that other software records.

Many SIEM products also have the ability to stop attacks while they are still in progress. The SIEM tool itself doesn’t directly stop an attack; rather, it communicates with other enterprise security controls, such as firewalls, and directs them to block the malicious activity. This incident response capability enables the SIEM system to prevent security breaches that other systems might not have noticed elsewhere in the enterprise.

To take this a step further, an organization can choose to have its SIEM tool ingest threat intelligence data from trusted external sources. If the SIEM tool detects any activity involving known malicious hosts, it can then terminate those connections or otherwise disrupt the malicious hosts’ interactions with the organization’s hosts. This surpasses detection and enters the realm of prevention.
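That intelligence-driven response loop can be sketched as follows. This is an illustrative model only: the indicator list, event fields and the injected `block_ip` callable are all hypothetical stand-ins for whatever threat feed and enterprise control (a firewall API, typically) the SIEM actually drives.

```python
# Sample indicators from a threat intelligence feed (documentation IPs).
KNOWN_BAD = {"198.51.100.7", "203.0.113.66"}

def respond(events, block_ip):
    # For each event touching a known-bad host, direct the firewall
    # (via the injected block_ip callable) to cut the connection.
    blocked = []
    for e in events:
        ip = e["remote_ip"]
        if ip in KNOWN_BAD and ip not in blocked:
            block_ip(ip)
            blocked.append(ip)
    return blocked
```

The SIEM never blocks traffic itself; it only decides, and delegates enforcement to controls that sit in the traffic path.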

3. Improve the efficiency of incident handling activities

Another of the many SIEM benefits is that SIEM tools significantly increase the efficiency of incident handling, which in turn saves time and resources for incident handlers. More efficient incident handling ultimately speeds incident containment, thus reducing the amount of damage that many security breaches and incidents cause.

A SIEM tool can improve efficiency primarily by providing a single interface to view all the security log data from many hosts. Examples of how this can expedite incident handling include:

  • it enables an incident handler to quickly identify an attack’s route through the enterprise;
  • it enables rapid identification of all the hosts that were affected by a particular attack; and
  • it provides automated mechanisms to stop attacks that are still in progress and to contain compromised hosts.
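The second bullet above — rapidly identifying every affected host — amounts to a single query against the centralized store. A toy version over normalized events (field names are hypothetical) makes the efficiency argument plain: one pass over one data set, instead of a manual sweep of every machine.

```python
def hosts_touched_by(events, attacker_ip):
    # With all logs in one place, "which hosts did this attacker reach?"
    # collapses to a set comprehension over the event stream.
    return sorted({e["host"] for e in events
                   if e.get("remote_ip") == attacker_ip})
```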

The benefits of SIEM products make them a necessity

The benefits of SIEM tools enable an organization to get a big-picture view of its security events throughout the enterprise. By bringing together security log data from enterprise security controls, host operating systems, applications and other software components, a SIEM tool can analyze large volumes of security log data to identify attacks, security threats and compromises. This correlation enables the SIEM tool to identify malicious activity that no other single host could because the SIEM tool is the only security control with true enterprise-wide visibility.      

Businesses turn to SIEM tools, meanwhile, for a few different purposes. One of the most common SIEM benefits is streamlined reporting for security compliance initiatives — such as HIPAA, PCI DSS and Sarbanes-Oxley — by centralizing the log data and providing built-in support to meet the reporting requirements of each initiative.

Another common use for SIEM tools is detecting incidents that would otherwise be missed and, when possible, automatically stopping attacks that are in progress to limit the damage.

Finally, SIEM products can also be invaluable to improve the efficiency of incident handling activities, both by reducing resource utilization and allowing real-time incident response, which also helps to limit the damage.

Today’s SIEM tools are available for a variety of architectures, including public cloud-based services, which makes them suitable for use in organizations of all sizes. Considering their support for automating compliance reporting, incident detection and incident handling activities, SIEM tools have become a necessity for virtually every organization.

BGP hijacking attacks target payment systems

Researchers discovered BGP hijacking attacks targeting payment processing systems and using new tricks to maximize the attackers’ hold on DNS servers.

Doug Madory, director of internet analysis at Oracle Dyn, first saw these Border Gateway Protocol (BGP) hijacking attacks in April 2018 and has seen them continue through July. The first attack targeted an Amazon DNS server in order to lure victims to a malicious site and steal cryptocurrency, but more recent attacks targeted a wider range of U.S. payment services.

“As in the Amazon case, these more recent BGP hijacks enabled imposter DNS servers to return forged DNS responses, misdirecting unsuspecting users to malicious sites.  By using long TTL values in the forged responses, recursive DNS servers held these bogus DNS entries in their caches long after the BGP hijack had disappeared — maximizing the duration of the attack,” Madory wrote in a blog post. “The normal TTL for the targeted domains was 10 minutes (600 seconds).  By configuring a very long TTL, the forged record could persist in the DNS caching layer for an extended period of time, long after the BGP hijack had stopped.”

Madory detailed attacks on telecom companies in Indonesia and Malaysia as well as BGP hijacking attacks on U.S. credit card and payment processing services, the latter of which lasted anywhere from a few minutes to almost three hours. While the payment services attacks featured similar techniques to the Amazon DNS server attack, it’s unclear if the same threat actors are behind them.

Justin Jett, director of audit and compliance for Plixer, said BGP hijacking attacks are “extremely dangerous because they don’t require the attacker to break into the machines of those they want to steal from.”

“Instead, they poison the DNS cache at the resolver level, which can then be used to deceive the users. When a DNS resolver’s cache is poisoned with invalid information, it can take a long time post-attacked to clear the problem. This is because of how DNS TTL works,” Jett wrote via email. “As Oracle Dyn mentioned, the TTL of the forged response was set to about five days. This means that once the response has been cached, it will take about five days before it will even check for the updated record, and therefore is how long the problem will remain, even once the BGP hijack has been resolved.”
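The TTL mechanics Jett describes are easy to model. In this minimal sketch of a resolver cache (the domain and address are illustrative), a forged record planted with a five-day TTL keeps answering queries long after both the hijack and the domain's normal 600-second TTL would have expired.

```python
class ResolverCache:
    """Toy model of a recursive resolver's answer cache."""

    def __init__(self):
        self._records = {}

    def put(self, name, addr, ttl, now):
        # Cache the answer until now + TTL seconds, as resolvers do.
        self._records[name] = (addr, now + ttl)

    def get(self, name, now):
        entry = self._records.get(name)
        if entry and now < entry[1]:
            return entry[0]
        return None  # expired: the resolver would re-query upstream
```

Only when the forged entry finally ages out does the resolver go back upstream and pick up the legitimate record.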

Madory was not optimistic about what these BGP hijacking attacks might portend because of how fundamental BGP is to the structure of the internet.

“If previous hijacks were shots across the bow, these incidents show the internet infrastructure is now taking direct hits,” Madory wrote. “Unfortunately, there is no reason not to expect to see more of these types of attacks against the internet.”

Matt Chiodi, vice president of cloud security at RedLock, was equally worried and said these BGP hijacking attacks should be taken as a warning.

“BGP and DNS are the silent warriors of the internet and these attacks are extremely serious because nearly all other internet services assume they are secure. Billions of users rely on these mostly invisible services to accomplish everything from Facebook to banking,” Chiodi wrote via email. “Unfortunately, mitigating BGP and DNS-based attacks is extremely difficult given the trust-based nature of both systems.”