Dreamforce brings Salesforce product upgrades

Users can anticipate more Einstein AI features to be integrated with Salesforce products and more news about the CRM vendor’s recent acquisitions and how they will play pivotal roles in the Salesforce platform.

Salesforce is expected to unveil the Einstein and acquisition developments at Dreamforce, the company’s annual customer conference in San Francisco that attracts nearly 150,000 attendees.

Analysts said they expect substantial upgrades to core Salesforce systems, more use cases for Einstein, and details on how the recent acquisitions of CloudCraze and MuleSoft fit into the Salesforce ecosystem.

“Salesforce is trying to tell the story that they are the customer success platform for all companies, B2B, B2C and companies that operate in both industries,” said John Bruno, an analyst at Forrester.

Bruno added that he expects more keynotes than usual from companies like Adidas that show how Salesforce products allow companies to work with a variety of customers, from both the business and consumer sectors.

“I think you’ll hear a tight story around exactly how Salesforce and CloudCraze and Commerce Cloud fit for B2B and B2C companies,” Bruno said. “Is it going to be prime time ready? No, but they will target that story because Salesforce hasn’t told that story great.”

Attendees at Dreamforce 2017 in San Francisco: users can learn about new upgrades and features for all Salesforce products at the conference.

New Quip Slides system

Meanwhile, Salesforce said Sept. 17, a week before Dreamforce, that it will show at the conference a PowerPoint-esque upgrade to its content collaboration platform, Quip, called Quip Slides.

Quip Slides is an AI-assisted platform to help workgroups create interactive presentations mainly for internal meetings and training. It features real-time collaboration, charting, live data, feedback prompts and engagement insights.

Another feature in Quip is Salesforce partner-built Live Apps, which enable work teams to embed Box and Dropbox files into Quip.

Integrating the Integration Cloud

The CloudCraze acquisition was just one of several the San Francisco-based CRM giant made to improve its suite of products. Salesforce spent $6.5 billion to acquire MuleSoft and build out what it’s calling the Integration Cloud.

Paul Greenberg, founder and analyst at The 56 Group, said he sees the name “Integration Cloud” as a misnomer, but that he thinks the MuleSoft purchase is a pivotal acquisition to bolster Salesforce.

“Despite its silly name as Integration Cloud, MuleSoft was a smart acquisition as it gives Salesforce access to all these different layers of service and does a lot of things Salesforce couldn’t previously do,” Greenberg said. “For integrations to succeed, it’s not just about building on the Salesforce platform. Without MuleSoft it was harder to build out integrations.”

With many organizations working to upgrade legacy systems, update their processes and provide customers with a modern experience, connecting legacy systems to current platforms is often laborious. Salesforce hopes its Integration Cloud will help ease that transition.

“We’ve ended up in a hybrid world,” said Michael Fauscette, chief research officer at G2 Crowd. “We’ve created so many data silo issues and it’s incumbent on the platform players to provide the ability to get past that.”

Continuing with business transformation

In addition to the expected unveiling of Integration Cloud and B2B commerce use cases, Salesforce is anticipated to continue its strategy of bringing together different customer-facing departments to help curate better customer experiences.

“I don’t think it’s a fully mature or fully conscious Salesforce strategy, but Salesforce is drilling down toward more personalization,” Greenberg said. “Salesforce’s Connections conference was the first step to that public story where we saw Marketing Cloud, Sales Cloud and Service Cloud becoming cross clouds in more significant ways than ever before.”

Bruno, from Forrester, agreed that organizational transformation and how Salesforce products can help is a major theme for Salesforce.

“What Salesforce is recognizing is there’s a whole different set of roles for how you manage customers now,” Bruno said. “I can see themes where [Salesforce] recognizes businesses have changed, customer engagement has changed and they are trying to provide solutions to account for that.”

More than just Salesforce products

Beyond the larger topics around its new acquisitions and customer empowerment, all of the core Salesforce products are expected to receive upgrades, and users will be able to attend sessions with roadmaps outlining each product’s future.

“A core part of Dreamforce is about unveiling new innovations and it’s what customers have come to expect,” said Brigitte Donner, VP and conference chair for Dreamforce, at Salesforce. “We have more product keynotes planned than ever before.”

Donner added that the theme for Dreamforce is “change,” extending beyond just Salesforce products to larger social issues, with the first climate summit planned at Dreamforce this year, as well as Salesforce bringing back an equality summit.

Dreamforce takes place Sept. 25 to 28. Check SearchSalesforce.com for daily conference coverage.

Juniper-Ericsson partnership aimed at 5G market

Juniper Networks has partnered with Ericsson to offer carriers a collection of products for moving 4G and 5G traffic from a cell site to the network core. The deal marks an important win for Juniper, which is filling the void left by the nearly dead partnership between rival Cisco and Ericsson.

The Juniper-Ericsson alliance combines routers and software from both companies to build an optical transport for a mobile network that carriers can manage through a single software console, according to the vendors. The partners’ combined routers include Juniper’s MX and PTX series and Ericsson’s 6000 hardware.

Juniper and Ericsson have partnered on technology for almost 20 years. But the latest deal is a “significant win” for Juniper, because it improves the company’s chances of winning deals, as service providers build out their network infrastructure to deliver 5G wireless services to consumers and businesses, said Rajesh Ghai, an analyst at IDC.

For example, the partnership could provide Juniper with access to the many service providers that use Ericsson’s radio access technology to connect customers’ mobile devices to the carriers’ core networks, Ghai said. Ericsson has a 40% share of the radio access market.

Also, of the three top carrier suppliers, Ericsson is the only one without an extensive routing portfolio — a void Juniper can fill. The other two suppliers are Nokia and Huawei.

“It was critical that Juniper get aligned with Ericsson,” Ghai said. “It remains to be seen how exclusive Ericsson can keep the relationship.”

Meanwhile, Juniper’s biggest rival, Cisco, is more focused on selling its routers directly to service providers, rather than through Ericsson, Ghai said. Also, Cisco and Ericsson compete with products for the packet core, which has created “suspicion between the two partners.”

Cisco and Ericsson announced a wide-ranging partnership in 2015, but financial troubles pushed Ericsson into an extensive reorganization that prevented the company from following through on the deal. Nevertheless, Cisco has never declared the partnership dead, despite its failure to reach sales goals.

“Where we need to partner with Ericsson, we will continue to do that. And where we’re working directly with SPs [service providers], we’ll continue to do that,” said Sumeet Arora, general manager of service provider network systems at Cisco.

Juniper, Ericsson combined products for service providers

The Juniper-Ericsson partnership includes Juniper’s MX Series 5G Universal Routing Platform and its PTX Series Packet Transport Routers. The hardware supports mobile infrastructure for 10 Gbps, 100 Gbps and 400 Gbps optical transport.

Juniper has aimed the MX at the service provider’s WAN edge, which could include routing traffic from a cell site onto the service provider’s core network. The PTX Series can handle traffic on the service provider’s backbone. Juniper has also designed the hardware to handle internet peering and data center interconnects.

Juniper’s MX and PTX routers are interoperable with Ericsson’s Router 6000 mobile backhaul and fronthaul portfolio. A wireless backhaul router connects mobile device traffic to a network node, such as the internet or a proprietary network. A fronthaul device sits at the access layer of the network and aggregates traffic from IoT devices.

Other hardware covered in the partnership includes Ericsson’s MINI-LINK microwave radio backhaul device. The partners are also offering software such as Juniper’s firewall, called the SRX Series Services Gateway, and Ericsson’s management and orchestration technology for controlling all the partners’ products.

In general, analysts do not expect service providers to take 5G infrastructure technology into production until next year, with businesses unlikely to buy 5G services until 2020 at the earliest. Industry observers expect IoT to be an initial driver of the 5G commercial market.

Panasas storage roadmap includes route to software-defined

Panasas is easy to overlook in the scale-out NAS market. The company’s products don’t carry the name recognition of Dell EMC Isilon, NetApp NAS filers and IBM Spectrum Scale. But CEO Faye Pairman said her team is content to fly below the radar — for now — concentrating mostly on high-performance computing, or HPC.

The Panasas storage flagship is the ActiveStor hybrid array with the PanFS parallel file system. The modular architecture scales performance in a linear fashion, as additional capacity is added to the system. “The bigger our solution gets, the faster we go,” Pairman said.

Panasas founder Garth Gibson launched the object-based storage architecture in 2000. Gibson, a computer science professor at Carnegie Mellon University in Pittsburgh, was a developer of the RAID storage taxonomy. He serves as Panasas’ chief scientist.

Panasas has gone through many changes over the past several years, marked by varying degrees of success in its efforts to broaden into mainstream commercial NAS. That was Pairman’s charter when she took over as CEO in 2010. Key executives left in a 2016 management shuffle, and while investors have provided $155 million to Panasas since its inception, the last reported funding was a $52.5 million venture round in 2013.

As a private company, Panasas does not disclose its revenue, but “we don’t have the freedom to hemorrhage cash,” Pairman said.

We caught up with Pairman recently to discuss Panasas’ growth strategy, which could include offering a software-only license option for PanFS. She also addressed how the vendor is moving to make its software portable and why Panasas isn’t jumping on the object-storage bandwagon.

Panasas storage initially aimed for the high end of the HPC market. You were hired to increase Panasas’ presence in the commercial enterprise space. How have you been executing on that strategy?

Faye Pairman: It required looking at our parallel file system and making it more commercially ready, with features added to improve stability and make it more usable and reliable. We’ve been on that track until very recently.

We have an awesome file system that is very targeted at the midrange commercial HPC market. We sell our product as a fully integrated appliance, so our next major objective — and we announced some of this already — is to disaggregate the file system from the hardware. The reason we did that is to take advantage of commodity hardware choices on the market.

Once the file system is what we call ‘portable,’ meaning you can run it on any hardware, there will be a lot of new opportunity for us. That’s what you’ll be hearing from us in the next six months.

Would Panasas storage benefit by introducing an object storage platform, even as an archive device?

Pairman: You know, this is a question we’ve struggled with over the years. Our customers would like us to service the whole market. [Object storage] would be a very different financial profile than the markets we serve. As a small company, right now, it’s not a focus for us.

We differentiate in terms of performance and scale. Normally, what you see in scale-out NAS is that the bigger it gets, the more sluggish it tends to be. We have linear scalability, so the bigger our solution gets, the faster we go.

That’s critically important to the segments we serve. It’s different from object storage, which is all about being simple and the ability to get bigger and bigger. And performance is not a consideration.

Which vendors do you commonly face off with in deals? 

Pairman: Our primary competitor is IBM Spectrum Scale, with a file system and approach that is probably the most similar to our own and a very clear target on commercial HPC. We also run into Isilon, which plays more to commercial — meaning high reads, high usability features, but [decreased] performance at scale.

And then, at the very high end, we see DataDirect Networks (DDN) with a Lustre file system for all-out performance, but very little consideration for usability and manageability.

Which industry verticals are prominent users of Panasas storage architecture? Are you a niche within the niche of HPC?

Pairman: The niche is in the niche. We target very specific markets and very specific workloads. We serve all kinds of application environments, where we manage very large numbers of users and very large numbers of files.

Our target markets are manufacturing, which is a real sweet spot, as well as life sciences and media and entertainment. We also have a big practice in oil and gas exploration and all kinds of scientific applications, and even some manufacturing applications within the federal government.

Panasas storage is a hybrid system, and we manage a combination of disk and flash. With every use case, while we specialize in managing very large files, we also have the ability to manage the file size that a company does on flash.

What impact could DDN’s acquisition of open source Lustre exert on the scale-out sector, in general, and Panasas in particular?

Pairman: I think it’s a potential market-changer and might benefit us, which is why we’re keeping a close eye on where Lustre ends up. We don’t compete directly with Lustre, which is more at the high end.

Until now, Lustre always sat in pretty neutral hands. It was in a peaceful place with Intel and Seagate, but they both exited the Lustre business, and Lustre ended up in DDN’s hands. It remains to be seen what that portends. But there is a long list of vendors that depend on Lustre remaining neutral, and now it’s in the hands of the most aggressive competitor in that space.

What happens to Lustre is less relevant to us if it stays the same. If it falters, we think we have an opportunity to move into that space. It’s potentially a big shakeup that could benefit vendors like us who build a proprietary file system.

Dell EMC HCI and storage cloud plans on display at VMworld

LAS VEGAS — Dell EMC launched cloud-related enhancements to its storage and hyper-converged infrastructure products today at the start of VMworld 2018.

The Dell EMC HCI and storage product launch includes a new VxRail hyper-converged appliance, which uses VMware vSAN software. The vendor also added a cloud version of the Unity midrange unified storage array and cloud enhancements to the Data Domain data deduplication platform.

Dell EMC HCI key for multi-cloud approach?

Dell EMC is also promising synchronized releases between the VxRail and the VMware vSAN software that turns the PowerEdge into an HCI system – although it could take 30 days for the “synchronization.” Still, that’s an improvement over the six months or so it now takes for the latest vSAN release to make it to VxRail.

Like other vendors, Dell EMC considers its HCI a key building block for private and hybrid clouds. The ability to offer private clouds with public cloud functionality is becoming an underpinning of the multi-cloud strategies at some organizations.

Sam Grocott, senior vice president of marketing for the Dell EMC infrastructure solutions group, said the strong multi-cloud flavor of the VMworld product launches reflects conversations the vendor has with its customers.

“As we talk to customers, the conversation quickly turns to what we are doing in the cloud,” Grocott said. “Customers talk about how they’re evaluating multiple cloud vendors. The reality is, they aren’t just picking one cloud, they’re picking two or even three clouds in a lot of cases. Not all your eggs will be in one basket.”

Dell EMC isn’t the only storage vendor making its storage more cloud-friendly. Its main storage rival, NetApp, also offers unified primary storage and backup options that run in the cloud, and many startups focus on cloud compatibility and multi-cloud management from the start.

Grocott said Dell’s overall multi-cloud strategy is to provide a consistent operating model experience on premises, as well as in private and public clouds. That strategy covers Dell EMC and VMware products. Dell EMC VxRail is among the products that tightly integrate VMware with the vendor’s storage.

“That’s what we think is going to differentiate us from any of the competition out there,” he said. “Whether you’re protecting data or storing data, the learning curve of your operating model — regardless of whether you’re on premises or off premises — should be zero.”

Stu Miniman, a principal analyst at IT research firm Wikibon, said Dell EMC is moving toward what Wikibon calls a True Private Cloud.

Wikibon’s 2018 True Private Cloud report predicts almost all enterprise IT will move to a hybrid cloud model dominated by SaaS and true private cloud. Wikibon defines true private cloud as completely integrating all aspects of a public cloud, including a single point of contact for purchase, support, maintenance and upgrades.

“The new version of the private cloud is, let’s start with the operating model I have in the public cloud, and that’s how I should be able to consume it, bill it and manage it,” Miniman said. “It’s about the software, it’s about the usability, it’s about the management layer. Step one is to modernize the platform; step two is to modernize the apps. It’s taken a couple of years to move along that spectrum.”

SIEM evaluation criteria: Choosing the right SIEM products

Security information and event management products and services collect, analyze and report on security log data from a large number of enterprise security controls, host operating systems, enterprise applications and other software used by an organization. Some SIEMs also attempt to stop attacks in progress that they detect, potentially preventing compromises or limiting the damage that successful compromises could cause.

There are many SIEM systems available today, including light SIEM products designed for organizations that cannot afford or do not feel they need a fully featured SIEM added to their current security operations.

Because light SIEM products offer few capabilities and are much easier to evaluate, they are out of scope for this article. Instead, this feature points out the capabilities of full-featured SIEMs, which merit particularly close attention compared with other security technologies, and can serve as a guide for creating SIEM evaluation criteria.

It can be quite a challenge to figure out which products to evaluate, let alone to choose the one that’s best for a particular organization or team. Part of the evaluation process involves creating a list of SIEM evaluation criteria potential buyers can use to highlight important capabilities.

1. How much native support does the SIEM provide for relevant log sources?

A SIEM’s value is diminished if it cannot receive and understand log data from all of the log-generating sources in the organization. Most obvious are the organization’s enterprise security controls, such as firewalls, virtual private networks, intrusion prevention systems, email and web security gateways, and antimalware products.

It is reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in these categories. If the tool does not, it should have no role in your security operations.

In addition, a SIEM should provide native support for log files from the organization’s operating systems. An exception is mobile device operating systems, which often do not provide any security logging capabilities.

SIEMs should also natively support the organization’s major database platforms, as well as any enterprise applications that enable users to interact with sensitive data. Native SIEM support for other software is generally nice to have, but it is not mandatory.

If a SIEM does not natively support a log source, then the organization can either develop customized code to provide the necessary support or use the SIEM without the log source’s data.
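
For readers weighing that build-it-yourself option, here is a minimal sketch of what such customized support can look like: a small Python adapter that parses a line from a hypothetical, unsupported application log and normalizes it into a generic JSON event that a SIEM collector could ingest, for example over syslog or an HTTP listener. The log format, field names and "custom-app" tag are illustrative assumptions, not any vendor's actual schema.

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical format of an unsupported application's log line, e.g.:
# "2018-09-25 14:02:11 LOGIN_FAILED user=jsmith src=203.0.113.7"
LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<event>\w+) user=(?P<user>\S+) src=(?P<src_ip>\S+)"
)

def to_siem_event(raw_line):
    """Normalize one raw log line into a generic JSON-ready event for SIEM ingestion."""
    match = LOG_PATTERN.match(raw_line.strip())
    if not match:
        return None  # unparseable lines could be set aside for analyst review
    fields = match.groupdict()
    timestamp = datetime.strptime(fields["ts"], "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return {
        "timestamp": timestamp.isoformat(),
        "event_type": fields["event"],
        "user": fields["user"],
        "source_ip": fields["src_ip"],
        "log_source": "custom-app",  # tag so analysts can trace events back to this adapter
    }

if __name__ == "__main__":
    line = "2018-09-25 14:02:11 LOGIN_FAILED user=jsmith src=203.0.113.7"
    print(json.dumps(to_siem_event(line), indent=2))
```

The alternative, of course, is simply to use the SIEM without that source's data and accept the visibility gap.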

2. Can the SIEM supplement existing logging capabilities?

An organization’s particular applications and software may lack robust logging capabilities. Some SIEM systems and services can supplement these by performing their own monitoring in addition to their regular job of log management.

In essence, this extends the SIEM from being strictly a centralized log collection, analysis and reporting tool to also generating raw log data on behalf of other hosts.

3. How effectively can the SIEM make use of threat intelligence?

Most SIEMs are capable of ingesting threat intelligence feeds. These feeds, which are often acquired from separate subscriptions, contain up-to-date information on threat activity observed all over the world, including which hosts are being used to stage or launch attacks and what the characteristics of these attacks are. The greatest value in using these feeds is enabling the SIEM to identify attacks more accurately and to make more informed decisions, often automatically, about which attacks need to be stopped and what the best method is to stop them.

Of course, the quality of threat intelligence varies between vendors. Factors to consider when evaluating threat intelligence should include how often the threat intelligence updates and how the threat intelligence vendor indicates its confidence in the malicious nature of each threat.
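
To make the feed-matching idea concrete, here is a minimal sketch, using assumed data, of how a correlation step might weigh the two factors mentioned above: how recently an indicator was seen and how confident the vendor is that it is malicious. The feed contents, freshness cutoff and alerting threshold are hypothetical; production SIEMs typically consume standardized feeds such as STIX/TAXII rather than an in-memory dictionary.

```python
from datetime import datetime, timedelta, timezone

# Illustrative threat intelligence indicators: IP -> (confidence 0-100, last_seen)
THREAT_FEED = {
    "198.51.100.23": (90, datetime(2018, 9, 20, tzinfo=timezone.utc)),
    "203.0.113.7":   (40, datetime(2018, 6, 1,  tzinfo=timezone.utc)),
}

MAX_AGE = timedelta(days=30)   # assumed staleness cutoff for an indicator
MIN_CONFIDENCE = 75            # assumed vendor-confidence threshold for alerting

def enrich_event(event, now):
    """Attach threat-intel context to an event and flag it if it crosses the threshold."""
    indicator = THREAT_FEED.get(event.get("source_ip", ""))
    if indicator:
        confidence, last_seen = indicator
        fresh = (now - last_seen) <= MAX_AGE
        event["ti_confidence"] = confidence
        event["ti_fresh"] = fresh
        event["alert"] = fresh and confidence >= MIN_CONFIDENCE
    else:
        event["alert"] = False
    return event

if __name__ == "__main__":
    evt = {"source_ip": "198.51.100.23", "event_type": "LOGIN_FAILED"}
    print(enrich_event(evt, datetime.now(timezone.utc)))
```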

4. What forensic capabilities can SIEM products provide?

Forensics capabilities are an evolving SIEM evaluation criterion. Traditionally, SIEMs have only collected data provided by other log sources.

However, recently some SIEM systems have added various forensic capabilities that can collect their own data regarding suspicious activity. A common example is the ability to do full packet captures for a network connection associated with malicious activity. Assuming that these packets are unencrypted, a SIEM analyst can then review their contents more closely to better understand the nature of the packets.

Another aspect of forensics is host activity logging; the SIEM product can perform such logging at all times, or the logging could be triggered when the SIEM tool suspects suspicious activity involving a particular host.

5. What features do SIEM products provide to assist with performing data analysis?

SIEM products that are used for incident detection and handling should provide features that help users to review and analyze the log data for themselves, as well as the SIEM’s own alerts and other findings. One reason for this is that even a highly accurate SIEM will occasionally misinterpret events and generate false positives, so people need to have a way to validate the SIEM’s results.

Another reason for this is that the users involved in security analytics need helpful interfaces to facilitate their investigations. Examples of such interfaces include sophisticated search capabilities and data visualization capabilities.

6. How timely, secure and effective are the SIEM’s automated response capabilities?

Another SIEM evaluation criterion is the product’s automated response capabilities. This is often an organization-specific endeavor because it is highly dependent on the organization’s network architecture, network security controls and other aspects of security management.

For example, a particular SIEM product may not have the ability to direct an organization’s firewall or other network security controls to terminate a malicious connection.

Besides ensuring the SIEM product can communicate its needs to the organization’s other major security controls, it is also important to consider the following characteristics:

  • How long does it take the SIEM to detect an attack and direct the appropriate security controls to stop it?
  • How are the communications between the SIEM and the other security controls protected so as to prevent eavesdropping and alteration?
  • How effective is the SIEM product at stopping attacks before damage occurs?
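
As a rough illustration of the first two questions, the sketch below pushes a block rule for a malicious IP to a firewall's management API over HTTPS and times the round trip. The endpoint, payload and token are hypothetical; a real integration would use the specific firewall vendor's documented API and the SIEM's own response mechanism.

```python
import json
import time
import urllib.request

# Hypothetical firewall management endpoint and token; real products document their own APIs.
FIREWALL_API = "https://firewall.example.internal/api/v1/block-rules"
API_TOKEN = "REPLACE_ME"

def block_ip(malicious_ip):
    """Ask the firewall to block an IP and return the round-trip time in seconds."""
    payload = json.dumps({"action": "deny", "source_ip": malicious_ip}).encode()
    request = urllib.request.Request(
        FIREWALL_API,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",  # credentials plus TLS protect the channel
        },
        method="POST",
    )
    start = time.monotonic()
    with urllib.request.urlopen(request, timeout=5) as response:  # HTTPS guards against eavesdropping
        response.read()
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = block_ip("198.51.100.23")
    print(f"Block rule pushed in {elapsed:.2f}s")
```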

7. Which security compliance initiatives does the SIEM support with built-in reporting?

Most SIEMs offer highly customizable reporting capabilities. Many of these products also offer built-in support to generate reports that meet the requirements of various security compliance initiatives. Each organization should identify which initiatives are applicable and then ensure that the SIEM product supports as many of these initiatives as possible.

For any initiatives that the SIEM does not support, make sure that the SIEM product supports the proper customizable reporting options to meet your requirements.

Do your homework and evaluate

SIEMs are complex technologies that require extensive integration with enterprise security controls and numerous hosts throughout an organization. To evaluate which tool is best for your organization, it may be helpful to define basic SIEM evaluation criteria. There is not a single SIEM product that is the best system for all organizations; every environment has its own combination of IT characteristics and security needs.

Even the main reason for having a SIEM, such as meeting compliance reporting requirements or aiding in incident detection and handling, may vary widely between organizations. Therefore, each organization should do its own evaluation before acquiring a SIEM product or service. Examine the offerings from several SIEM vendors before even considering deployment.

This article presents several SIEM evaluation criteria that organizations should consider, but other criteria may also be necessary. Think of these as a starting point for the organization to customize and build upon to develop its own list of SIEM evaluation criteria. This will help ensure the organization chooses the best possible SIEM product.

NAND flash manufacturers showcase new technologies

NAND flash manufacturers laid out their roadmaps for next-generation products and architectures at the 2018 Flash Memory Summit this month.

As expected, Intel, Micron, SK Hynix and Toshiba talked up 3D NAND flash chips that can store four bits of data per cell, known as quadruple-level cell (QLC). They also spotlighted their 96-layer 3D NAND and outlined roadmaps that extend to 128 layers and beyond to further boost density.

NAND flash manufacturers introduced new efforts to speed performance, raise density and lower costs. Toshiba launched a low-latency option called XL-Flash. Chinese startup Yangtze Memory Technologies Co. (YMTC) hopes to catch up to the flash chip incumbents with its “Xtacking” architecture that can potentially increase performance and bit density. And South Korea-based chipmaker SK Hynix harbors similar aspirations with its so-called “4D NAND” flash that industry experts say is a misnomer.

Key NAND flash manufacturer Samsung was notably absent from the Flash Memory Summit keynotes, a year after discussing its Z-NAND technology at the conference. Z-NAND is another attempt to reduce costs by shifting periphery logic to a place that doesn’t take up space on the flash chip, said Jim Handy, general director and semiconductor analyst at Objective Analysis.

Here are some of the new technologies that NAND flash manufacturers showcased at last week’s Flash Memory Summit:

Toshiba’s XL-Flash

Toshiba’s XL-Flash is based on the company’s single-level cell (SLC) 3D NAND Bit Cost Scalable (BiCS) technology and enables optimization for multi-level cell (MLC) flash. The XL stands for excellent latency, according to Shigeo (Jeff) Ohshima, a technology executive in SSD application engineering at Toshiba Memory Corporation.

Ohshima said XL-Flash requires no additional process and is fully compatible with conventional flash in terms of the command protocol and interface. The read latency of XL-Flash could be 10 times faster than conventional TLC flash devices, according to Ohshima.

He said the company has “a lot of room” to do more with its current 3D NAND BiCS flash technology before new nonvolatile memories such as resistive RAM (ReRAM), magnetoresistive RAM (MRAM), and phase change memory ramp up in volume and become dominant.

“So it ain’t over ’til it’s over,” Ohshima said.

Ohshima said a combination of XL-Flash and denser QLC flash could handle a broad range of application workloads and improve overall system performance over the classic storage architecture of DRAM and HDDs. He noted the performance gap between XL-Flash and QLC flash is considerably smaller than the differential between DRAM and HDDs. And, although XL-Flash is slower than DRAM, it costs less and offers higher capacity.

Industry analysts view Toshiba’s XL-Flash and Samsung’s Z-NAND as a low-latency, flash-based response to 3D XPoint memory technology that Intel and Micron co-developed. Intel last year began shipping 3D XPoint-based SSDs under the brand name Optane, and this year started sampling persistent memory modules that use the 3D XPoint technology. Micron has yet to release products based on 3D XPoint.

David Floyer, CTO and co-founder of Wikibon, said Toshiba’s XL-Flash and Samsung’s Z-NAND will never quite reach the performance of Optane SSDs, but they’ll get “pretty close” and won’t cost anywhere near as much.

Handy expects XL-Flash and Z-NAND to read data at a similar speed to Optane, but he said they “will still be plagued by the extraordinarily slow write cycle that NAND flash is stuck with because of quantum mechanics.”

Startup takes on incumbent NAND flash manufacturers

YMTC hopes to challenge established NAND flash manufacturers with Xtacking. YMTC claims the new architecture can improve efficiency and I/O speed, reduce die size and increase bit density, and shorten development time.

“It really takes courage to go down that path because we know that it’s not easy to make that technology work,” YMTC CEO Simon Yang said.

Unlike conventional NAND, Xtacking separates the processing between the flash cell array and the periphery circuitry, or logic, onto different wafers. The startup claimed the high-voltage transistors that conventional NAND typically uses for the periphery circuit limit NAND I/O speed, while Xtacking permits the use of lower voltage transistors that can enable higher I/O speed and more advanced functions.

“We really can match the DDR4 I/O speed without any limitation,” Yang said.

Yang said results have been encouraging. He said the flash chip yield is increasing, and the reliability of the memory bits through cycling looks positive. YMTC plans to introduce samples of the new Xtacking-based flash technology into the market early next year, Yang said.

“Hopefully, we can catch up with our friends and contribute to this industry,” Yang said.

YMTC started 3D NAND development in 2014 with a nine-layer test chip and later co-developed a 32-layer test chip with Spansion, which merged with Cypress Semiconductor. YMTC moved the chip into production late last year, but Yang said the company held back on volume ramp-up because the first-generation product was not cost competitive.

“We are very much profit-driven,” Yang said. He later added, “We only want to ramp into volume when it’s cost competitive.”

Handy expressed skepticism that YMTC will be able to meet its cost target, but he said YMTC’s Xtacking efforts might help the company to get to market faster.

SK Hynix 4D NAND flash

SK Hynix came up with a new name to describe its latest NAND flash technology. The company said its “4D NAND” puts the periphery circuitry under the charge-trap-flash-based 3D NAND cell array to reduce chip size, cut the number of process steps and lower overall cost over conventional NAND, in which the periphery circuitry is generally alongside the NAND cell.

But industry analysts say 4D NAND is merely a catchy marketing term and the approach is not unique.

“YMTC is stacking a chip on top of the other, whereas Hynix is putting the logic on the same bit but just building it underneath,” Handy said. “The cost of the chip is a function of how big the die is, and if you tuck things underneath other things, you make the die smaller. What Hynix is doing is a good thing, but I wouldn’t call it an innovation because of the fact that it’s the mainstream product for Intel and Micron.”

Intel and Micron have touted their CMOS under the array (CuA) technology in both their 64-layer QLC and 96-layer TLC flash technologies that they claim reduces die sizes and improves performance over competitive approaches. Handy said Samsung has also discussed putting the logic under the flash chip.

Hyun Ahn, senior vice president of NAND development and business strategy at SK Hynix, said his company’s charge-trap-based 4D NAND roadmap starts at 96 layers and extends to 128 layers and beyond using the same platform.

The first SK Hynix 4D NAND technology will begin sampling in the fourth quarter with 96 stacks of NAND cells, an I/O speed of 1.2 Gbps per pin, and a mobile package measuring 11.5 mm by 12 mm. The chip size is 30% smaller, and the 4D NAND can replace two 256 Gb chips with similar performance, according to SK Hynix.

The new SK Hynix 512 Gb triple-level cell (TLC) 4D NAND improves write performance by 30% and read performance by 25% over the company’s prior 72-stack TLC 3D NAND, with 150% greater power efficiency.

An upcoming 1 terabit (Tb) TLC 4D NAND chip that SK Hynix will sample in the first half of next year fits into a 16 mm by 20 mm ball grid array (BGA) package, with a maximum of 2 TB per BGA package. An enterprise U.2 SSD using the technology will offer up to 64 TB of capacity, according to SK Hynix.

SK Hynix plans to begin sampling 96-stack QLC 4D NAND, with 1 Tb density in a mono die, in the second half of next year. The company said the QLC 4D NAND would provide more than 20% higher wafer capacity than the TLC NAND that it has been producing since the second half of last year. The 72-stack, enterprise-class 3D NAND will represent more than 50% of SK Hynix NAND production this year, the company said.

5 takeaways from Brad Smith’s speech at the RISE conference

Tapping AI to solve the world’s big problems

Microsoft has long been known for suites of products, Smith said, and the company is now bringing that approach to a new suite of programs, AI for Good. This initiative’s first program, AI for Earth, was started in 2017 and brings advances in computer science to four environmental areas of focus: biodiversity, water, agriculture and climate change.

Under this program, Microsoft is committing $50 million over five years to provide seed grants to nongovernmental organizations, startups and researchers in more than 20 countries, Smith said. The most promising projects will receive additional funding, and Microsoft will use insights gleaned to build new products and tools. The program is already showing success, Smith said — the use of AI helped farmers in Tasmania improve their yields by 15 percent while reducing environmental runoffs. And in Singapore, AI helped reduce electrical consumption in buildings by almost 15 percent.

“We’re finding that AI, indeed, has the potential to help solve some of the world’s most pressing problems,” he said.

Improving accessibility for people with disabilities

Computers can see and hear. They can tell people what’s going on around them. Those abilities position AI to help the more than one billion people worldwide who have disabilities, Smith said.

“One of the things we’ve learned over the last year is that it’s quite possible that AI can do more for people with disabilities than for any other group on the planet,” he said.

Recognizing that potential, Microsoft in May announced AI for Accessibility, a $25 million, five-year initiative focused on using AI to help people with disabilities. The program provides grants of technology, AI expertise and platform-level services to developers, NGOs, inventors and others working on AI-first solutions to improve accessibility. Microsoft is also investing in its own AI-powered solutions, such as real-time, speech-to-text transcription and predictive text functionality.

Smith pointed to Seeing AI, a free Microsoft app designed for people who are blind or have low vision, as an example of the company’s efforts. This app, which provides narration to describe a person’s surroundings, identify currency and even gauge emotions on people’s faces, has been used over four million times since being launched a year ago.

“AI is absolutely a game-changer for people with disabilities,” Smith said.

Governing AI: a Hippocratic Oath for coders?

For AI to fulfill its potential to serve humanity, it must adhere to “timeless values,” Smith said. But defining those values in a diverse world is challenging, he acknowledged. AI is “posing for computers every ethical question that has existed for people,” he said, and requires an approach that takes into account a broad range of philosophies and ethical traditions.

University students and professors have been seeking to create a Hippocratic Oath for AI, Smith said, similar to the pledge doctors take to uphold specific ethical standards. Smith said a broader global conversation about the ethics of AI is needed, and ultimately, a new legal framework.

“We’re going to have to develop these ethical principles, and we’re going to have to work through the details that sometimes will be difficult,” he said. “Because the ultimate question is whether we want to live in a future of artificial intelligence where only ethical people create ethical AI, or whether we want to live in a world where, at least to some degree, ethical AI is required and assured for all of us.

“There’s only one way to do that, and that is with a new generation of laws.”

Critical Cisco vulnerabilities patched in Policy Suite

Cisco disclosed and patched a handful of critical and high-severity vulnerabilities in its products this week.

The company fixed four critical vulnerabilities in its Policy Suite: Two are flaws that enabled remote unauthenticated access to the Policy Builder interface; one flaw is in the Open Services Gateway initiative (OSGi) interface; and the last is in the Cluster Manager.

A successful exploit of one of the critical Cisco vulnerabilities in Policy Builder — tracked as CVE-2018-0374 — gave attackers access to the database and the ability to change any data in that database. The other vulnerability in the Policy Builder interface — tracked as CVE-2018-0376 — could have enabled an attacker to change existing repositories and create new repositories through the interface.

The third critical vulnerability could have enabled an attacker to directly connect to the OSGi interface remotely and without authentication. Once exploited, an attacker could have accessed or changed any files accessible by the OSGi process.

The last of the critical Cisco vulnerabilities — CVE-2018-0375 — was in the Cluster Manager of Cisco Policy Suite. With this flaw, an attacker could have logged in remotely using the root account, which has static default credentials, and executed arbitrary commands.

The Cisco Policy Suite manages policies and subscriber data for service providers by connecting to network routers and packet data gateways.

The Cisco vulnerabilities affected Policy Suite releases prior to 18.2.0. The Cisco Product Security Incident Response team has already patched the vulnerabilities and has not seen any exploits in the wild.

Cisco also disclosed and patched seven high-severity flaws in its software-defined WAN (SD-WAN) products, though only one of them can be exploited remotely and without authentication — unlike the four critical vulnerabilities. One of the SD-WAN flaws requires authentication and local access to exploit; the others require only authentication.

The SD-WAN vulnerabilities gave attackers the ability to overwrite arbitrary files on the operating system and execute arbitrary commands. One was a denial-of-service vulnerability in the zero-touch provisioning feature, and four were command injection vulnerabilities.

The company also patched a high-severity denial-of-service vulnerability in the Cisco Nexus 9000 Series Fabric Switches, as well as 16 other medium-severity issues in a variety of its other products.

In other news:

  • Venmo, the mobile payment app owned by PayPal, has its API set to public by default and is exposing user data. According to researcher Hang Do Thi Duc, if a Venmo user accepts the default settings on their account, their transaction details are publicly accessible through the API. “It’s incredibly easy to see what people are buying, who they’re sending money to, and why,” Do Thi Duc said in a blog post. She noted that she was able to gather data on cannabis retailers, lovers’ quarrels and the unhealthy eating habits of users — along with their identifying information. Do Thi Duc was able to gather all of this and more by perusing the public Venmo API and looking specifically at the 207,984,218 transactions left accessible to the public in 2017. “I think it’s problematic that there is a public feed which includes real names, their profile links (to access past transactions), possibly their Facebook IDs and essentially their network of friends they spend time with,” she wrote. “And all of this is so easy to access! I believe this could be designed better.”
  • Multinational telecommunications company Telefonica suffered a data breach that exposed the data of millions of customers. Spanish users of Telefonica’s Movistar telecommunication services may have had their personal and financial information exposed because of the breach, including phone numbers, full names, national ID numbers, addresses, banking information, and call and data records. The breach was discovered after a Movistar user reported it to FACUA, a Spanish consumer rights nonprofit. Because of a design flaw in the Movistar online portal, anyone with a Movistar account could access other users’ data. FACUA notified Telefonica of the breach, and the company responded the next day, at which point FACUA made a public disclosure.
  • Oracle’s July Critical Patch Update (CPU) patched 334 security vulnerabilities, including 61 critical flaws, across many of its products. The most vulnerable affected product is the Oracle Financial Services application, which has 56 vulnerabilities — 21 of which can be exploited over the network without authentication. The vulnerabilities with the highest severity ratings — with a CVSS score of 9.8 — are in Oracle’s Financial Services, Fusion Middleware, PeopleSoft, E-Business Suite, retail applications and others. Over 200 vulnerabilities noted in the Oracle CPU affected business-critical applications. This month’s CPU has the highest number of patches at 334; the runner-up was 308 patches in July 2017.

NSS Labs ranks next-gen firewalls, with some surprises

New testing of next-generation firewalls found that products from seven vendors effectively protected enterprises from malicious traffic for a reasonable total cost of ownership — under $10 per Mbps of network traffic.

NSS Labs released its annual evaluation of next-gen firewalls on Tuesday, offering seven of 10 product recommendations for security effectiveness and total cost of ownership (TCO) based on comparative testing of hardware and software that prevents unauthorized access to networks.

“Our data shows that north of 80% of enterprises deploy next-gen firewalls,” said Jason Brvenik, CTO at NSS Labs, who noted that the market is mature and many of these vendors’ technologies are in refresh cycles.

The research analysts reviewed next-gen firewalls from 10 vendors for the comparative group test, including:

  • Barracuda Networks CloudGen Firewall F800.CCE v7.2.0;
  • Check Point 15600 Next Generation Threat Prevention Appliance vR80.20;
  • Cisco Firepower 4120 Security Appliance v6.2.2;
  • Forcepoint NGFW 2105 Appliance v6.3.3 build 19153 (Update Package: 1056);
  • Fortinet FortiGate 500E V5.6.3GA build 7858;
  • Palo Alto Networks PA-5220 PAN-OS 8.1.1;
  • SonicWall NSa 2650 SonicOS Enhanced 6.5.0.10-73n;
  • Sophos XG Firewall 750 SFO v17 MR7;
  • Versa Networks FlexVNF 16.1R1-S6; and
  • WatchGuard M670 v12.0.1.B562953.

The independent testing involved some cooperation from participating vendors and in some cases help from consultants who verified that the next-gen firewall technology was configured properly using default settings for physical and virtual test environments. NSS Labs did not evaluate systems from Huawei or Juniper Networks because it could not “verify the products,” which researchers claimed was necessary to measure their effectiveness.

Despite the maturity of the NGFW market, the vast majority of enterprises don’t customize default configurations, according to Brvenik. Network security teams disable core protections that are noisy to avoid false positives and create access control policies, but otherwise they trust the vendors’ default recommendations.

The expanding functionality in next-gen firewalls underscores the complexity of protecting enterprise networks against modern threats. In addition to detecting and blocking malicious traffic through the use of dynamic packet filtering and user-defined security policies, next-gen firewalls integrate intrusion prevention systems (IPS), application and user awareness controls, threat intelligence to block malware, SSL and SSH inspection and, in some cases, support for cloud services.

Some products offer a single management console to enable network security teams to monitor firewall deployments and policies, including VPN and IPS, across environments. An assessment of manageability was not part of NSS Labs’ evaluation, however. NSS Labs focused on the firewall technology itself.

Worth the investment?

Researchers used individual test reports and comparison data to assess security effectiveness, which ranged from 25.0% to 99.7%, and total cost of ownership per protected Mbps, which ranged from $2 to $57, to determine the value of investments. The testing resulted in overall ratings of “recommended” for seven next-gen firewalls, “caution” for two that offered limited value (Check Point and Sophos), and “security recommended” for one with a higher than average cost (Cisco).

The security effectiveness assessment was based on the product’s ability to enforce security policies and block attacks while passing nonmalicious traffic over a testing period that lasted several hours. Researchers factored in exploit block rates, evasion techniques, stability and reliability, and performance under different traffic conditions. The total cost of ownership per protected Mbps was calculated using a three-year TCO based on capital expenditure for the products divided by security effectiveness times network throughput.
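
As a worked example of that calculation, using assumed numbers rather than figures from the report, a product with a $120,000 three-year TCO, 95% security effectiveness and 7,500 Mbps of tested throughput would come out to roughly $16.84 per protected Mbps:

```python
def tco_per_protected_mbps(three_year_tco_usd, security_effectiveness, throughput_mbps):
    """Three-year TCO divided by (security effectiveness x tested throughput)."""
    return three_year_tco_usd / (security_effectiveness * throughput_mbps)

# Assumed example values, not figures from the NSS Labs report.
print(round(tco_per_protected_mbps(120_000, 0.95, 7_500), 2))  # -> 16.84
```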

Six of the next-gen firewalls scored 90.3% or higher for security effectiveness, and most products cost less than $10 per protected Mbps of network throughput, according to the report. While the majority of the next-gen firewalls received favorable assessments, four failed to detect one or more common evasion techniques, which could cause a product to completely miss a class of attacks.

Lack of resilience

NSS Labs added a new test in 2018 for resiliency against modified exploits and, according to the report, none of the devices exhibited resilience against all attack variants.

“The most surprising thing that we saw in this test was that … our research and our testing showed that a fair number of firewalls did not demonstrate resilience against changes in attacks that are already known,” Brvenik said.

Enterprises deploy next-gen firewalls to protect their networks from the internet, he added, and as part of that they expect that employees who browse the internet should not have to worry about new threats. Technology innovation related to cloud integration and real-time updates is promising, but key enterprise problems remain unsolved, such as the ability to defend against attacks delivered in JavaScript.

“I think one of the greatest opportunities in the market is to handle that traffic,” said Brvenik, who noted that some next-gen firewalls performed adequately in terms of toolkit-based protections, but NSS Labs didn’t observe any of them “wholly mitigating JavaScript.”

TCO in 2018 is trending lower than in previous years. While there are a number of very affordable next-gen firewalls on the market, vendors that can’t validate the effectiveness of their next-gen firewalls with independent testing showing the technology can consistently deliver top-level protections should be questioned, according to Brvenik. Affordable products are a great choice only if they achieve what the enterprise is looking for and “live up to the security climate.”

New HR tools for hourly workers, employee retention announced

This week’s crop of new products touches on nearly every type of employee: hourly workers, those with benefits and those too talented to lose.

Sapho’s new Employee Experience Portal 5.0 aims squarely at the growing — some would say underserved — population of hourly workers. Using a mobile phone, tablet or the web, hourly workers can manage their shifts, deal with time entry and even get access to relevant company information.

The goal is to make life easier for employees and employers, said Peter Yared, CTO of Sapho, based in San Bruno, Calif. “There is a seismic shift happening now around hourly workers,” Yared said. “Wages are increasing, and companies are starting to invest in IT that supports their hourly workers.”

One thing that makes HR tricky with hourly workers is the high degree of employee turnover, Yared said. Ideally, once an employee is trained, they remain. But training can be difficult in companies with complicated legacy systems, and that can lead to increased turnover. Sapho’s new platform could be a place where basic training information is easily available, Yared said, offering hourly workers an information safety net.

“The cost of training hourly workers is astronomical,” he said. “This is one place where an employer could potentially see a lot of ROI.”

More subtle benefits are also possible, Yared suggested. Not all hourly workers have cellphones with data plans, meaning they can’t easily get online to find out about shifts, changes in company policies or even to request a day off. The more easily an hourly worker can have access to basic information and company-specific news that could be important, the better the connection between employer and employee, Yared said.

“All of a sudden, companies are ready for this now,” he said. “They’re going to fix the interactions IT has with their hourly workers.”

Keep top talent happy

Limeade’s Turnover Dashboard now has a feature that can help identify employees most likely to be job hunting. With a strong U.S. economy and high job demand in many sectors, the risk of employee turnover can be high in some areas.

Using machine learning, the new Turnover Dashboard feature lets HR pros look at detailed data from different departments, locations or countries, and then break that data down into subgroups, as necessary. Once the groups are identified, the platform can reach out to those at the highest risk of leaving with content and activities designed to increase employee engagement. Over time, the machine learning algorithm will grow smarter about what matters to a company’s top talent and provide more detailed information.

At the heart of this system are 40 variables — pulled from data science analysis — including Limeade’s Well-Being Assessment responses and activity on the platform itself.
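
As a purely illustrative sketch of how such a risk model can work, the snippet below trains a simple classifier on made-up stand-ins for a few of those variables and scores a hypothetical employee. Limeade's actual variables and model are proprietary; nothing here reflects its implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative features standing in for a few of the "40 variables":
# [well_being_score, platform_activity_per_week, months_since_last_promotion]
X = np.array([
    [0.82, 5.0, 6],
    [0.35, 0.5, 30],
    [0.60, 2.0, 14],
    [0.25, 0.0, 40],
    [0.90, 7.0, 3],
    [0.40, 1.0, 26],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = employee left during the training window

model = LogisticRegression().fit(X, y)

# Score a hypothetical current employee and flag them if turnover risk is high.
risk = model.predict_proba([[0.30, 0.5, 28]])[0][1]
print(f"Estimated turnover risk: {risk:.0%}")
if risk > 0.7:
    print("High risk: surface engagement content for this group.")
```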

A cloud-based benefits platform

EBenefits, a benefits administration company that is part of the University of Pittsburgh Medical Center’s insurance division, has begun to demo a cloud-based benefit platform. The platform offers a way for employers to not only provide benefit choices to employees, but also to keep tabs on what matters most through the use of data analytics. Employees can research options, including standard benefits, private exchanges and matters related to compliance with the U.S. Affordable Care Act.