Tag Archives: products

SIEM evaluation criteria: Choosing the right SIEM products

Security information and event management products and services collect, analyze and report on security log data from a large number of enterprise security controls, host operating systems, enterprise applications and other software used by an organization. Some SIEMs also attempt to stop attacks in progress that they detect, potentially preventing compromises or limiting the damage that successful compromises could cause.

There are many SIEM systems available today, including light SIEM products designed for organizations that cannot afford or do not feel they need a fully featured SIEM added to their current security operations.

Because light SIEM products offer fewer capabilities and are much easier to evaluate, they are outside the scope of this article. Instead, this feature points out the capabilities of full-featured SIEMs and can serve as a guide for creating SIEM evaluation criteria, which merit particularly close attention compared with other security technologies.

It can be quite a challenge to figure out which products to evaluate, let alone to choose the one that’s best for a particular organization or team. Part of the evaluation process involves creating a list of SIEM evaluation criteria potential buyers can use to highlight important capabilities.

1. How much native support does the SIEM provide for relevant log sources?

A SIEM’s value is diminished if it cannot receive and understand log data from all of the log-generating sources in the organization. The most obvious of these are the organization’s enterprise security controls, such as firewalls, virtual private networks, intrusion prevention systems, email and web security gateways, and antimalware products.

It is reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in these categories. If the tool does not, it should have no role in your security operations.

In addition, a SIEM should provide native support for log files from the organization’s operating systems. An exception is mobile device operating systems, which often do not provide any security logging capabilities.

SIEMs should also natively support the organization’s major database platforms, as well as any enterprise applications that enable users to interact with sensitive data. Native SIEM support for other software is generally nice to have, but it is not mandatory.

If a SIEM does not natively support a log source, then the organization can either develop customized code to provide the necessary support or use the SIEM without the log source’s data.
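
In practice, the custom-code route usually means writing a small collector that parses the source’s records and normalizes them into fields the SIEM already understands. The minimal Python sketch below illustrates the idea; the log format, field names and regular expression are hypothetical and not tied to any particular SIEM or product.

    import json
    import re
    from datetime import datetime, timezone

    # Hypothetical line format: "2018-07-20 14:03:11 LOGIN_FAIL user=alice src=10.0.0.5"
    LINE_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
        r"(?P<event>\w+) user=(?P<user>\S+) src=(?P<src_ip>\S+)"
    )

    def normalize(line):
        """Map one raw log line to a flat, SIEM-friendly record, or None if it cannot be parsed."""
        match = LINE_RE.match(line.strip())
        if not match:
            return None
        return {
            "timestamp": datetime.strptime(match.group("ts"), "%Y-%m-%d %H:%M:%S")
                                 .replace(tzinfo=timezone.utc).isoformat(),
            "event_type": match.group("event"),
            "user": match.group("user"),
            "source_ip": match.group("src_ip"),
            "log_source": "custom-app",   # tag so the SIEM can route and report on the source
        }

    if __name__ == "__main__":
        print(json.dumps(normalize("2018-07-20 14:03:11 LOGIN_FAIL user=alice src=10.0.0.5")))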

2. Can the SIEM supplement existing logging capabilities?

An organization’s particular applications and software may lack robust logging capabilities. Some SIEM systems and services can supplement these by performing their own monitoring in addition to their regular job of log management.

In essence, this extends the SIEM from being strictly a centralized log collection, analysis and reporting tool to also generating raw log data on behalf of other hosts.

3. How effectively can the SIEM make use of threat intelligence?

Most SIEMs are capable of ingesting threat intelligence feeds. These feeds, which are often acquired from separate subscriptions, contain up-to-date information on threat activity observed all over the world, including which hosts are being used to stage or launch attacks and what the characteristics of these attacks are. The greatest value in using these feeds is enabling the SIEM to identify attacks more accurately and to make more informed decisions, often automatically, about which attacks need to be stopped and what the best method is to stop them.

Of course, the quality of threat intelligence varies between vendors. Factors to consider when evaluating threat intelligence include how often it is updated and how the vendor indicates its confidence in the malicious nature of each threat.
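
As a rough illustration of how a feed’s confidence scores might drive automated decisions, the sketch below checks events against a hypothetical indicator list and escalates to blocking only above a confidence threshold. The feed format, field names and threshold are assumptions made purely for illustration.

    # Hypothetical threat-intelligence indicators: address -> vendor-assigned confidence (0-100).
    THREAT_FEED = {
        "203.0.113.7": {"confidence": 95, "category": "botnet C2"},
        "198.51.100.23": {"confidence": 40, "category": "scanner"},
    }

    BLOCK_THRESHOLD = 80   # assumed policy: automatically block only high-confidence indicators

    def triage(event):
        """Return an action for one normalized event based on threat-intelligence confidence."""
        indicator = THREAT_FEED.get(event["source_ip"])
        if indicator is None:
            return "ignore"
        if indicator["confidence"] >= BLOCK_THRESHOLD:
            return "block ({}, confidence {})".format(indicator["category"], indicator["confidence"])
        return "alert for analyst review ({})".format(indicator["category"])

    print(triage({"source_ip": "203.0.113.7"}))    # block (botnet C2, confidence 95)
    print(triage({"source_ip": "198.51.100.23"}))  # alert for analyst review (scanner)
    print(triage({"source_ip": "192.0.2.1"}))      # ignore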

4. What forensic capabilities can SIEM products provide?

Forensic capabilities are an evolving SIEM evaluation criterion. Traditionally, SIEMs have only collected data provided by other log sources.

However, recently some SIEM systems have added various forensic capabilities that can collect their own data regarding suspicious activity. A common example is the ability to do full packet captures for a network connection associated with malicious activity. Assuming that these packets are unencrypted, a SIEM analyst can then review their contents more closely to better understand the nature of the packets.

Another aspect of forensics is host activity logging; the SIEM product can perform such logging at all times, or the logging can be triggered when the SIEM tool detects suspicious activity involving a particular host.
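
A minimal sketch of the triggered approach, in which an alert switches a host to enhanced collection for a fixed window while baseline logging continues everywhere else, might look like the following. The class, mode names and 15-minute window are illustrative assumptions, not features of any specific SIEM.

    import time

    class ForensicCollector:
        """Baseline logging runs everywhere; an alert switches a host to enhanced collection
        (for example, full packet capture and detailed process auditing) for a fixed window."""

        def __init__(self, window_seconds=900):
            self.window_seconds = window_seconds
            self.enhanced_until = {}   # host -> epoch time when enhanced logging expires

        def on_alert(self, host):
            """Triggered collection: start or extend the enhanced-logging window for a host."""
            self.enhanced_until[host] = time.time() + self.window_seconds

        def mode(self, host):
            if self.enhanced_until.get(host, 0) > time.time():
                return "enhanced"
            return "baseline"

    collector = ForensicCollector()
    collector.on_alert("db-server-01")
    print(collector.mode("db-server-01"))  # enhanced
    print(collector.mode("web-01"))        # baseline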

5. What features do SIEM products provide to assist with performing data analysis?

SIEM products that are used for incident detection and handling should provide features that help users to review and analyze the log data for themselves, as well as the SIEM’s own alerts and other findings. One reason for this is that even a highly accurate SIEM will occasionally misinterpret events and generate false positives, so people need to have a way to validate the SIEM’s results.

Another reason for this is that the users involved in security analytics need helpful interfaces to facilitate their investigations. Examples of such interfaces include sophisticated search capabilities and data visualization capabilities.

6. How timely, secure and effective are the SIEM’s automated response capabilities?

Another SIEM evaluation criterion is the product’s automated response capabilities. Evaluating these is often an organization-specific endeavor because their effectiveness is highly dependent on the organization’s network architecture, network security controls and other aspects of security management.

For example, a particular SIEM product may not have the ability to direct an organization’s firewall or other network security controls to terminate a malicious connection.

Besides ensuring the SIEM product can communicate its needs to the organization’s other major security controls, it is also important to consider the following characteristics:

  • How long does it take the SIEM to detect an attack and direct the appropriate security controls to stop it?
  • How are the communications between the SIEM and the other security controls protected so as to prevent eavesdropping and alteration?
  • How effective is the SIEM product at stopping attacks before damage occurs?
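
The first question above, the time from detection to containment, lends itself to direct measurement during an evaluation. The sketch below is a hypothetical timing harness; the placeholder block function stands in for what, in a real deployment, would be an authenticated, encrypted call to a firewall or other network security control.

    import time

    def block_connection(source_ip):
        """Placeholder for directing a firewall to drop traffic from the address; assumed to be
        an authenticated, TLS-protected API call in a real deployment."""
        time.sleep(0.05)   # simulate the round trip to the network security control

    def handle_detection(event):
        """Time the path from detection to containment for one event."""
        detected_at = time.monotonic()
        block_connection(event["source_ip"])
        contained_at = time.monotonic()
        return contained_at - detected_at

    latency = handle_detection({"source_ip": "203.0.113.7"})
    print(f"detection-to-containment: {latency * 1000:.0f} ms")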

7. Which security compliance initiatives does the SIEM support with built-in reporting?

Most SIEMs offer highly customizable reporting capabilities. Many of these products also offer built-in support to generate reports that meet the requirements of various security compliance initiatives. Each organization should identify which initiatives are applicable and then ensure that the SIEM product supports as many of these initiatives as possible.

For any initiatives that the SIEM does not support, make sure that the SIEM product supports the proper customizable reporting options to meet your requirements.

Do your homework and evaluate

SIEMs are complex technologies that require extensive integration with enterprise security controls and numerous hosts throughout an organization. To evaluate which tool is best for your organization, it may be helpful to define basic SIEM evaluation criteria. There is not a single SIEM product that is the best system for all organizations; every environment has its own combination of IT characteristics and security needs.

Even the main reason for having a SIEM, such as meeting compliance reporting requirements or aiding in incident detection and handling, may vary widely between organizations. Therefore, each organization should do its own evaluation before acquiring a SIEM product or service. Examine the offerings from several SIEM vendors before even considering deployment.

This article presents several SIEM evaluation criteria that organizations should consider, but other criteria may also be necessary. Think of these as a starting point for the organization to customize and build upon to develop its own list of SIEM evaluation criteria. This will help ensure the organization chooses the best possible SIEM product.

NAND flash manufacturers showcase new technologies

NAND flash manufacturers laid out their roadmaps for next-generation products and architectures at the 2018 Flash Memory Summit this month.

As expected, Intel, Micron, SK Hynix and Toshiba talked up 3D NAND flash chips that can store four bits of data per cell, known as quadruple-level cell (QLC). They also spotlighted their 96-layer 3D NAND and outlined roadmaps that extend to 128 layers and beyond to further boost density.
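
The density gain comes from the exponential relationship between bits per cell and the number of charge states each cell must reliably distinguish, which is also why each additional bit is harder to achieve. A quick back-of-the-envelope comparison:

    # Bits per cell determine how many distinct charge states each cell must hold apart (2^bits)
    # and the raw density gain relative to single-level cell (SLC) flash.
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
        print(f"{name}: {bits} bit(s)/cell, {2 ** bits} charge states/cell, {bits}x SLC bit density")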

NAND flash manufacturers introduced new efforts to speed performance, raise density and lower costs. Toshiba launched a low-latency option called XL-Flash. Chinese startup Yangtze Memory Technologies Co. (YMTC) hopes to catch up to the flash chip incumbents with its “Xtacking” architecture that can potentially increase performance and bit density. And South Korea-based chipmaker SK Hynix harbors similar aspirations with its so-called “4D NAND” flash that industry experts say is a misnomer.

Key NAND flash manufacturer Samsung was notably absent from the Flash Memory Summit keynotes, a year after discussing its Z-NAND technology at the conference. Z-NAND is another attempt to reduce costs by shifting periphery logic to a place that doesn’t take up space on the flash chip, said Jim Handy, general director and semiconductor analyst at Objective Analysis.

Here are some of the new technologies that NAND flash manufacturers showcased at last week’s Flash Memory Summit:

Toshiba’s XL-Flash

Toshiba’s XL-Flash is based on the company’s single-level cell (SLC) 3D NAND Bit Cost Scalable (BiCS) technology and enables optimization for multi-level cell (MLC) flash. The XL stands for excellent latency, according to Shigeo (Jeff) Ohshima, a technology executive in SSD application engineering at Toshiba Memory Corporation.

Ohshima said XL-Flash requires no additional process steps and is fully compatible with conventional flash in terms of the command protocol and interface. Its read latency could be as much as 10 times lower than that of conventional TLC flash devices, according to Ohshima.

He said the company has “a lot of room” to do more with its current 3D NAND BiCS flash technology before new nonvolatile memories such as resistive RAM (ReRAM), magnetoresistive RAM (MRAM), and phase change memory ramp up in volume and become dominant.

“So it ain’t over ’til it’s over,” Ohshima said.

Ohshima said a combination of XL-Flash and denser QLC flash could handle a broad range of application workloads and improve overall system performance over the classic storage architecture of DRAM and HDDs. He noted the performance gap between XL-Flash and QLC flash is considerably smaller than the differential between DRAM and HDDs. And, although XL-Flash is slower than DRAM, it costs less and offers higher capacity.

Industry analysts view Toshiba’s XL-Flash and Samsung’s Z-NAND as a low-latency, flash-based response to 3D XPoint memory technology that Intel and Micron co-developed. Intel last year began shipping 3D XPoint-based SSDs under the brand name Optane, and this year started sampling persistent memory modules that use the 3D XPoint technology. Micron has yet to release products based on 3D XPoint.

David Floyer, CTO and co-founder of Wikibon, said Toshiba’s XL-Flash and Samsung’s Z-NAND will never quite reach the performance of Optane SSDs, but they’ll get “pretty close” and won’t cost anywhere near as much.

Handy expects XL-Flash and Z-NAND to read data at a similar speed to Optane, but he said they “will still be plagued by the extraordinarily slow write cycle that NAND flash is stuck with because of quantum mechanics.”

Startup takes on incumbent NAND flash manufacturers

YMTC hopes to challenge established NAND flash manufacturers with Xtacking. YMTC claims the new architecture can improve efficiency and I/O speed, reduce die size and increase bit density, and shorten development time.

“It really takes courage to go down that path because we know that it’s not easy to make that technology work,” YMTC CEO Simon Yang said.

Unlike conventional NAND, Xtacking separates the processing of the flash cell array and the periphery circuitry, or logic, onto different wafers. The startup claimed the high-voltage transistors that conventional NAND typically uses for the periphery circuit limit NAND I/O speed, and that Xtacking permits the use of lower-voltage transistors that can enable higher I/O speeds and more advanced functions.

“We really can match the DDR4 I/O speed without any limitation,” Yang said.

Yang said results have been encouraging. He said the flash chip yield is increasing, and the reliability of the memory bits through cycling looks positive. YMTC plans to introduce samples of the new Xtacking-based flash technology into the market early next year, Yang said.

“Hopefully, we can catch up with our friends and contribute to this industry,” Yang said.

YMTC started 3D NAND development in 2014 with a nine-layer test chip and later co-developed a 32-layer test chip with Spansion, which merged with Cypress Semiconductor. YMTC moved the chip into production late last year, but Yang said the company held back on volume ramp-up because the first-generation product was not cost competitive.

“We are very much profit-driven,” Yang said. He later added, “We only want to ramp into volume when it’s cost competitive.”

Handy expressed skepticism that YMTC will be able to meet its cost target, but he said YMTC’s Xtacking efforts might help the company to get to market faster.

SK Hynix 4D NAND flash

SK Hynix came up with a new name to describe its latest NAND flash technology. The company said its “4D NAND” puts the periphery circuitry under the charge-trap-flash-based 3D NAND cell array to reduce chip size, cut the number of process steps and lower overall cost compared with conventional NAND, in which the periphery circuitry generally sits alongside the NAND cell array.

But industry analysts say 4D NAND is merely a catchy marketing term, and the approach is not unique.

“YMTC is stacking a chip on top of the other, whereas Hynix is putting the logic on the same bit but just building it underneath,” Handy said. “The cost of the chip is a function of how big the die is, and if you tuck things underneath other things, you make the die smaller. What Hynix is doing is a good thing, but I wouldn’t call it an innovation because of the fact that it’s the mainstream product for Intel and Micron.”

Intel and Micron have touted their CMOS under the array (CuA) technology in both their 64-layer QLC and 96-layer TLC flash technologies that they claim reduces die sizes and improves performance over competitive approaches. Handy said Samsung has also discussed putting the logic under the flash chip.

Hyun Ahn, senior vice president of NAND development and business strategy at SK Hynix, said his company’s charge-trap-based 4D NAND starts at 96 layers, with a roadmap that extends to 128 layers and beyond using the same platform.

The first SK Hynix 4D NAND technology will begin sampling in the fourth quarter with 96 stacked layers of NAND cells, an I/O speed of 1.2 Gbps per pin and a mobile package measuring 11.5 mm by 12 mm. The chip size is 30% smaller, and one 4D NAND chip can replace two 256 Gb chips with similar performance, according to SK Hynix.

The new SK Hynix 512 Gb triple-level cell (TLC) 4D NAND improves write performance by 30% and read performance by 25% over the company’s prior 72-stack TLC 3D NAND, with 150% greater power efficiency.

Upcoming 1 terabit (Tb) TLC 4D NAND that SK Hynix will sample in the first half of next year fits into a 16 mm by 20 mm ball grid array (BGA) package, enabling a maximum of 2 TB per BGA package. An enterprise U.2 SSD using the technology will offer up to 64 TB of capacity, according to SK Hynix.

SK Hynix plans to begin sampling 96-stack QLC 4D NAND, with 1 Tb density in a mono die, in the second half of next year. The company said the QLC 4D NAND would provide more than 20% higher wafer capacity than the TLC NAND that it has been producing since the second half of last year. The 72-stack, enterprise-class 3D NAND will represent more than 50% of SK Hynix NAND production this year, the company said.

5 takeaways from Brad Smith’s speech at the RISE conference – On the Issues

Tapping AI to solve the world’s big problems

Microsoft has long been known for suites of products, Smith said, and the company is now bringing that approach to a new suite of programs, AI for Good. This initiative’s first program, AI for Earth, was started in 2017 and brings advances in computer science to four environmental areas of focus: biodiversity, water, agriculture and climate change.

Under this program, Microsoft is committing $50 million over five years to provide seed grants to nongovernmental organizations, startups and researchers in more than 20 countries, Smith said. The most promising projects will receive additional funding, and Microsoft will use insights gleaned to build new products and tools. The program is already showing success, Smith said — the use of AI helped farmers in Tasmania improve their yields by 15 percent while reducing environmental runoffs. And in Singapore, AI helped reduce electrical consumption in buildings by almost 15 percent.

“We’re finding that AI, indeed, has the potential to help solve some of the world’s most pressing problems,” he said.

Improving accessibility for people with disabilities

Computers can see and hear. They can tell people what’s going on around them. Those abilities position AI to help the more than one billion people worldwide who have disabilities, Smith said.

“One of the things we’ve learned over the last year is that it’s quite possible that AI can do more for people with disabilities than for any other group on the planet,” he said.

Recognizing that potential, Microsoft in May announced AI for Accessibility, a $25 million, five-year initiative focused on using AI to help people with disabilities. The program provides grants of technology, AI expertise and platform-level services to developers, NGOs, inventors and others working on AI-first solutions to improve accessibility. Microsoft is also investing in its own AI-powered solutions, such as real-time, speech-to-text transcription and predictive text functionality.

Smith pointed to Seeing AI, a free Microsoft app designed for people who are blind or have low vision, as an example of the company’s efforts. This app, which provides narration to describe a person’s surroundings, identify currency and even gauge emotions on people’s faces, has been used over four million times since being launched a year ago.

“AI is absolutely a game-changer for people with disabilities,” Smith said.

Governing AI: a Hippocratic Oath for coders?

For AI to fulfill its potential to serve humanity, it must adhere to “timeless values,” Smith said. But defining those values in a diverse world is challenging, he acknowledged. AI is “posing for computers every ethical question that has existed for people,” he said, and requires an approach that takes into account a broad range of philosophies and ethical traditions.

University students and professors have been seeking to create a Hippocratic Oath for AI, Smith said, similar to the pledge doctors take to uphold specific ethical standards. Smith said a broader global conversation about the ethics of AI is needed, and ultimately, a new legal framework.

“We’re going to have to develop these ethical principles, and we’re going to have to work through the details that sometimes will be difficult,” he said. “Because the ultimate question is whether we want to live in a future of artificial intelligence where only ethical people create ethical AI, or whether we want to live in a world where, at least to some degree, ethical AI is required and assured for all of us.

“There’s only one way to do that, and that is with a new generation of laws.”

Critical Cisco vulnerabilities patched in Policy Suite

Cisco disclosed and patched a handful of critical and high-severity vulnerabilities in its products this week.

The company fixed four critical vulnerabilities in its Policy Suite: Two are flaws that enabled remote unauthenticated access to the Policy Builder interface; one flaw is in the Open Services Gateway initiative (OSGi) interface; and the last is in the Cluster Manager.

A successful exploit of one of the critical Cisco vulnerabilities in Policy Builder — tracked as CVE-2018-0374 — gave attackers access to the database and the ability to change any data in that database. The other vulnerability in the Policy Builder interface — tracked as CVE-2018-0376 — could have enabled an attacker to change existing repositories and create new repositories through the interface.

The third critical vulnerability could have enabled an attacker to directly connect to the OSGi interface remotely and without authentication. Once exploited, an attacker could have accessed or changed any files accessible by the OSGi process.

The last of the critical Cisco vulnerabilities — CVE-2018-0375 — was in the Cluster Manager of Cisco Policy Suite. With this flaw, an attacker could have logged in remotely using the root account, which has static default credentials, and executed arbitrary commands.

The Cisco Policy Suite manages policies and subscriber data for service providers by connecting to network routers and packet data gateways.

The Cisco vulnerabilities affected Policy Suite releases prior to 18.2.0. The Cisco Product Security Incident Response team has already patched the vulnerabilities and has not seen any exploits in the wild.

Cisco also disclosed and patched seven high-severity flaws in its software-defined WAN (SD-WAN) products, though only one of them can be exploited remotely and without authentication — unlike the four critical vulnerabilities. One vulnerability requires both authentication and local access to exploit, while the others require only authentication.

The SD-WAN vulnerabilities gave attackers the ability to overwrite arbitrary files on the operating system and execute arbitrary commands. One was a denial-of-service vulnerability in the zero-touch provisioning feature, and there were four command injection vulnerabilities.

The company also patched a high-severity denial-of-service vulnerability in the Cisco Nexus 9000 Series Fabric Switches, as well as 16 other medium-severity issues in a variety of its other products.

In other news:

  • Venmo, the mobile payment app owned by PayPal, has its API set to public by default and is exposing user data. According to researcher Hang Do Thi Duc, if a Venmo user accepts the default settings on their account, their transaction details are publicly accessible through the API. “It’s incredibly easy to see what people are buying, who they’re sending money to, and why,” Do Thi Duc said in a blog post. She noted that she was able to gather data on cannabis retailers, lovers’ quarrels and the unhealthy eating habits of users — along with their identifying information. Do Thi Duc was able to gather all of this and more by perusing the public Venmo API and looking specifically at the 207,984,218 transactions left accessible to the public in 2017. “I think it’s problematic that there is a public feed which includes real names, their profile links (to access past transactions), possibly their Facebook IDs and essentially their network of friends they spend time with,” she wrote. “And all of this is so easy to access! I believe this could be designed better.”
  • Multinational telecommunications company Telefonica suffered a data breach that exposed the data of millions of customers. Spanish users of Telefonica’s Movistar telecommunication services may have had their personal and financial information exposed because of the breach, including phone numbers, full names, national ID numbers, addresses, banking information, and call and data records. The breach was discovered after a Movistar user reported it to FACUA, a Spanish consumer rights nonprofit. Because of a design flaw in the Movistar online portal, anyone with a Movistar account could access other users’ data. FACUA notified Telefonica of the breach, and the company responded the next day, at which point FACUA made a public disclosure.
  • Oracle’s July Critical Patch Update (CPU) patched 334 security vulnerabilities, including 61 critical flaws, across many of its products. The affected product with the most flaws is Oracle Financial Services Applications, with 56 vulnerabilities — 21 of which can be exploited over the network without authentication. The vulnerabilities with the highest severity ratings — with a CVSS score of 9.8 — are in Oracle’s Financial Services, Fusion Middleware, PeopleSoft, E-Business Suite, retail applications and others. Over 200 vulnerabilities noted in the Oracle CPU affected business-critical applications. This month’s CPU set a record with 334 patches; the previous high was 308 patches in July 2017.

NSS Labs ranks next-gen firewalls, with some surprises

New testing of next-generation firewalls found that products from seven vendors effectively protected enterprises from malicious traffic for a reasonable total cost of ownership — under $10 per Mbps of network traffic.

NSS Labs released its annual evaluation of next-gen firewalls on Tuesday, offering seven of 10 product recommendations for security effectiveness and total cost of ownership (TCO) based on comparative testing of hardware and software that prevents unauthorized access to networks.

“Our data shows that north of 80% of enterprises deploy next-gen firewalls,” said Jason Brvenik, CTO at NSS Labs, who noted that the market is mature and many of these vendors’ technologies are in refresh cycles.

The research analysts reviewed next-gen firewalls from 10 vendors for the comparative group test, including:

  • Barracuda Networks CloudGen Firewall F800.CCE v7.2.0;
  • Check Point 15600 Next Generation Threat Prevention Appliance vR80.20;
  • Cisco Firepower 4120 Security Appliance v6.2.2;
  • Forcepoint NGFW 2105 Appliance v6.3.3 build 19153 (Update Package: 1056);
  • Fortinet FortiGate 500E V5.6.3GA build 7858;
  • Palo Alto Networks PA-5220 PAN-OS 8.1.1;
  • SonicWall NSa 2650 SonicOS Enhanced 6.5.0.10-73n;
  • Sophos XG Firewall 750 SFO v17 MR7;
  • Versa Networks FlexVNF 16.1R1-S6; and
  • WatchGuard M670 v12.0.1.B562953.

The independent testing involved some cooperation from participating vendors and in some cases help from consultants who verified that the next-gen firewall technology was configured properly using default settings for physical and virtual test environments. NSS Labs did not evaluate systems from Huawei or Juniper Networks because it could not “verify the products,” which researchers claimed was necessary to measure their effectiveness.

Despite the maturity of the NGFW market, the vast majority of enterprises don’t customize default configurations, according to Brvenik. Network security teams disable core protections that are noisy to avoid false positives and create access control policies, but otherwise they trust the vendors’ default recommendations.

The expanding functionality in next-gen firewalls underscores the complexity of protecting enterprise networks against modern threats. In addition to detecting and blocking malicious traffic through the use of dynamic packet filtering and user-defined security policies, next-gen firewalls integrate intrusion prevention systems (IPS), application and user awareness controls, threat intelligence to block malware, SSL and SSH inspection and, in some cases, support for cloud services.

Some products offer a single management console to enable network security teams to monitor firewall deployments and policies, including VPN and IPS, across environments. An assessment of manageability was not part of NSS Labs’ evaluation, however. NSS Labs focused on the firewall technology itself.

Worth the investment?

Researchers used individual test reports and comparison data to assess security effectiveness, which ranged from 25.0% to 99.7%, and total cost of ownership per protected Mbps, which ranged from U.S. $2 to U.S. $57, to determine the value of investments. The testing resulted in overall ratings of “recommended” for seven next-gen firewalls, “caution” ratings indicating limited value for two (Check Point and Sophos), and a “security recommended” rating but higher-than-average cost for one (Cisco).

The security effectiveness assessment was based on the product’s ability to enforce security policies and block attacks while passing nonmalicious traffic over a testing period that lasted several hours. Researchers factored in exploit block rates, evasion techniques, stability and reliability, and performance under different traffic conditions. The total cost of ownership per protected Mbps was calculated using a three-year TCO based on capital expenditure for the products divided by security effectiveness times network throughput.
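
Using that definition, the calculation can be reproduced in a few lines; the dollar figure, effectiveness and throughput below are made-up inputs for illustration, not values from the NSS Labs report.

    def tco_per_protected_mbps(three_year_tco_usd, security_effectiveness, throughput_mbps):
        """Three-year cost divided by the throughput that is actually protected
        (effectiveness x rated throughput), following the report's definition."""
        protected_mbps = security_effectiveness * throughput_mbps
        return three_year_tco_usd / protected_mbps

    # Illustrative inputs only: a $45,000 three-year cost, 95% effectiveness, 5,000 Mbps throughput.
    print(f"${tco_per_protected_mbps(45_000, 0.95, 5_000):.2f} per protected Mbps")   # about $9.47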

Six of the next-gen firewalls scored 90.3% or higher for security effectiveness, and most products cost less than $10 per protected Mbps of network throughput, according to the report. While the majority of the next-gen firewalls received favorable assessments, four failed to detect one or more common evasion techniques, which could cause a product to completely miss a class of attacks.

Lack of resilience

NSS Labs added a new test in 2018 for resiliency against modified exploits and, according to the report, none of the devices exhibited resilience against all attack variants.

“The most surprising thing that we saw in this test was that … our research and our testing showed that a fair number of firewalls did not demonstrate resilience against changes in attacks that are already known,” Brvenik said.

Enterprises deploy next-gen firewalls to protect their networks from the internet, he added, and as part of that they expect that employees who browse the internet should not have to worry about new threats. Technology innovation related to cloud integration and real-time updates is promising, but key enterprise problems, such as the ability to defend against attacks delivered in JavaScript, remain unsolved.

“I think one of the greatest opportunities in the market is to handle that traffic,” said Brvenik, who noted that some next-gen firewalls performed adequately in terms of toolkit-based protections, but NSS Labs didn’t observe any of them “wholly mitigating JavaScript.”

TCO in 2018 is trending lower than in previous years. While there are a number of very affordable next-gen firewalls on the market, vendors that cannot validate their products’ effectiveness through independent testing showing the technology consistently delivers top-level protections should be questioned, according to Brvenik. Affordable products are a great choice only if they achieve what the enterprise is looking for and “live up to the security climate.”

New HR tools for hourly workers, employee retention announced

This week’s crop of new products touches on nearly every type of employee: hourly workers, those with benefits and those too talented to lose.

Sapho’s new Employee Experience Portal 5.0 aims squarely at the growing — some would say underserved — population of hourly workers. Using a mobile phone, tablet or the web, hourly workers can manage their shifts, deal with time entry and even get access to relevant company information.

The goal is to make life easier for employees and employers, said Peter Yared, CTO of Sapho, based in San Bruno, Calif. “There is a seismic shift happening now around hourly workers,” Yared said. “Wages are increasing, and companies are starting to invest in IT that supports their hourly workers.”

One thing that makes HR tricky with hourly workers is the high degree of employee turnover, Yared said. Ideally, once an employee is trained, he remains. But training can be difficult in companies with complicated legacy systems, and that can lead to increased turnover. Sapho’s new platform could be a place where basic training information is easily available, Yared said, offering hourly workers an information safety net.

“The cost of training hourly workers is astronomical,” he said. “This is one place where an employer could potentially see a lot of ROI.”

More subtle benefits are also possible, Yared suggested. Not all hourly workers have cellphones with data plans, meaning they can’t easily get online to find out about shifts, changes in company policies or even to request a day off. The more easily an hourly worker can have access to basic information and company-specific news that could be important, the better the connection between employer and employee, Yared said.

“All of a sudden, companies are ready for this now,” he said. “They’re going to fix the interactions IT has with their hourly workers.”

Keep top talent happy

Limeade’s Turnover Dashboard now has a feature that can help identify employees most likely to be job hunting. With a strong U.S. economy and high job demand in many sectors, the risk of employee turnover can be high in some areas.

Using machine learning, the new Turnover Dashboard feature lets HR pros look at detailed data from different departments, locations or countries, and then break that data down into subgroups as necessary. Once the groups are identified, the platform can reach out to those at the highest risk of leaving with content and activities designed to increase employee engagement. Over time, the machine learning algorithm will grow smarter about what matters to a company’s top talent and provide more detailed information.

At the heart of this system are 40 variables — pulled from data science analysis — including Limeade’s Well-Being Assessment responses and activity on the platform itself.

A cloud-based benefits platform

EBenefits, a benefits administration company that is part of the University of Pittsburgh Medical Center’s insurance division, has begun to demo a cloud-based benefit platform. The platform offers a way for employers to not only provide benefit choices to employees, but also to keep tabs on what matters most through the use of data analytics. Employees can research options, including standard benefits, private exchanges and matters related to compliance with the U.S. Affordable Care Act.

Azure PaaS strategy homes in on hybrid cloud, containers

Microsoft’s PaaS offerings might have a leg-up in terms of support for hybrid deployments, but the vendor still faces tough competition in a quickly evolving app-dev market.


The Azure PaaS portfolio continues to offer a compelling story for companies that need a development environment where legacy applications can move freely between on premises and the cloud. But even as the vendor increasingly embraces hybrid cloud, open source and emerging technologies, such as containers and IoT, it still faces tough competition from the likes of Google and AWS.

Strong foundation

Azure App Service is Microsoft’s flagship PaaS offering, enabling developers to build and deploy web and mobile applications in a variety of programming languages — without having to manage the underlying infrastructure.

But App Service represents just one of many services that Microsoft has rolled out over the years to help developers create, test, debug and extend application code. The company’s Visual Studio line, for example, now includes four product families: Visual Studio Integrated Development Environment, Visual Studio Team Services, Visual Studio Code and Visual Studio App Center, which includes connections to GitHub, Bitbucket and VSTS repositories to support continuous integration.

Microsoft has also created a vast independent software vendor and developer community, and has tightly integrated many of its development tools, according to Jeffrey Kaplan, managing director at THINKstrategies, Inc. Visual Studio and SQL Server, for example, support common design points and feature high levels of integration with App Service.

Microsoft’s Azure PaaS strategy is also unique in its focus on hybrid cloud deployments. Through its Hybrid Connections feature, for example, developers can build and manage cloud applications that access on-premises resources. What’s more, Azure App Service is also available for Azure Stack — Microsoft’s integrated hardware and software platform designed to bring Azure public cloud services to enterprises’ local data centers and simplify the deployment and management of hybrid cloud applications.

Missing pieces

But despite its broad portfolio and hybrid focus, Azure PaaS is not a panacea. While many traditional IT departments have embraced the offering, it hasn’t been as popular in business units, which now drive development initiatives in many organizations, according to Larry Carvalho, research director for IDC’s PaaS practice.

What’s more, organizations that don’t have a large footprint of legacy systems often prefer open source development tools, rather than tools like Visual Studio. Traditionally, Microsoft hasn’t offered support for open source technology as quickly as other cloud market leaders, such as AWS, according to Carvalho. This is likely because competitors like AWS are not weighed down by support for legacy products.

But while, historically, Microsoft’s business model has been antithetical to the open source approach, that’s started to change. The company has made an effort to embrace more open source technologies and recently purchased GitHub, a version control platform founded on the open source code management system Git.

The evolving face of PaaS

The PaaS landscape is evolving rapidly. Rather than traditional VMs, developers increasingly focus on containers, and interest in DevOps continues to rise. In an attempt to align with these trends, Microsoft now offers a managed Kubernetes service on its public cloud and recently added Azure Container Instances to enable developers to spin up new container workloads without having to manage the underlying server infrastructure.

Additionally, enterprises have a growing interest in application development for AI, machine learning and IoT platforms. And while Azure PaaS tools offer support for these technologies, Microsoft still needs to compete against fellow market leaders, AWS and Google — the latter of which has garnered a lot of attention for its development of TensorFlow, an open source machine learning framework.

Inside the private event where Microsoft, Google, Salesforce and other rivals share security secrets

Speaking this week on the Microsoft campus, L-R: Erik Bloch, Salesforce security products and program management director; Alex Maestretti, engineering manager on the Netflix Security Intelligence and Response Team; David Seidman, Google security engineering manager; and Chang Kawaguchi, director for Microsoft Office 365 security. (GeekWire Photos / Todd Bishop)

REDMOND, Wash. — At first glance, the gathering inside Building 99 at Microsoft this week looked like many others inside the company, as technical experts shared hard-earned lessons for using machine learning to defend against hackers.

It looked normal, that is, until you spotted the person in the blue Google shirt addressing the group, next to speakers from Salesforce, Netflix and Microsoft, at a day-long event that included representatives of Facebook, Amazon and other big cloud providers and services that would normally treat technical insights as closely guarded secrets.

As the afternoon session ended, the organizer from Microsoft, security data wrangler Ram Shankar Siva Kumar, complimented panelist Erik Bloch, the Salesforce security products and program management director, for “really channeling the Ohana spirit,” referencing the Hawaiian word for “family,” which Salesforce uses to describe its internal culture of looking out for one another.

It was almost enough to make a person forget the bitter rivalry between Microsoft and Salesforce.

Siva Kumar then gave attendees advice on finding the location of the closing reception. “You can Bing it, Google it, whatever it is,” he said, as the audience laughed at the rare concession to Microsoft’s longtime competitor.

It was no ordinary gathering at Microsoft, but then again, it’s no ordinary time in tech. The Security Data Science Colloquium brought the competitors together to focus on one of the biggest challenges and opportunities in the industry.

Machine learning, one of the key ingredients of artificial intelligence, is giving the companies new superpowers to identify and guard against malicious attacks on their increasingly cloud-oriented products and services. The problem is that hackers are using many of the same techniques to take those attacks to a new level.

Dawn Song, UC Berkeley computer science and engineering professor.

“The challenge is that security is a very asymmetric game,” said Dawn Song, a UC Berkeley computer science and engineering professor who attended the event. “Defenders have to defend across the board, and attackers only need to find one hole. So in general, it’s easier for attackers to leverage these new techniques.”

That helps to explain why the competitors are teaming up.

“At this point in the development of this technology it’s really critical for us to move at speed to all collaborate,” explained Mark Russinovich, the Microsoft Azure chief technology officer. “A customer of Google is also likely a customer of Microsoft, and it does nobody any good or gives anybody a competitive disadvantage to keep somebody else’s customer, which could be our own customer, insecure. This is for the betterment of everybody, the whole community.”

[Editor’s Note: Russinovich is a keynoter at the GeekWire Cloud Tech Summit, June 27 in Bellevue, Wash.]

This spirit of collaboration is naturally more common in the security community than in the business world, but the colloquium at Microsoft has taken it to another level. GeekWire is the first media organization to go inside the event, although some presentations weren’t opened up to us, due in part to the sensitive nature of some of the information the companies shared.

The event, in its second year, grew out of informal gatherings between Microsoft and Google, which resulted in part from connections Siva Kumar made on long-distance runs with Google’s tech security experts. After getting approval from his manager, he brought one of the Google engineers to Microsoft two years ago to compare notes with his team.

The closing reception for the Security Data Science Colloquium at Microsoft this week. (GeekWire Photo / Todd Bishop)

Things have snowballed from there. After the first event, last year, Siva Kumar posted about the colloquium, describing it as a gathering of “security data scientists without borders.” As the word got out, additional companies asked to be involved, and Microsoft says this year’s event was attended by representatives of 17 different tech companies in addition to university researchers.

The event reflects a change in Microsoft’s culture under CEO Satya Nadella, as well as a shift in the overall industry’s approach. Of course, the companies are still business rivals that compete on the basis of beating each other’s products. But in years or decades past, many treated security as a competitive advantage, as well. That’s what has changed.

“This is not a competing thing. This is not about us trying to one up each other,” Siva Kumar said. “It just feels like, year over year, our problems are just becoming more and more similar.”

Siamac Mirzaie of Netflix presents at the event. (GeekWire Photo / Todd Bishop)

In one afternoon session this week, representatives from Netflix, one of Amazon Web Services’ marquee customers, gave detailed briefings on the streaming service’s internal machine learning tools, including its “Trainman” system for detecting and reporting unusual user activity.

Developing and improving the system has been a “humbling journey,” said Siamac Mirzaie from the Netflix Science & Analytics Team, before doing a deep dive on the technical aspects of Trainman.

Depending on the situation, he said, Netflix uses either Python, Apache Spark or Flink to bring the data into its system and append the necessary attributes to the data. It then uses simple rules, statistical models and machine learning models to detect anomalies using Flink or Spark, followed by a post-processing layer that uses a combination of Spark and Node.js. That’s followed by a program for visualizing the anomalies in a timeline that people inside the company can use to drill down into and understand specific events.

“The idea is to refine the various data anomalies that we’ve generated in the previous stage into anomalies that our application owner or security analyst can actually relate to,” Mirzaie said.
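
Netflix did not publish Trainman’s internals, but the general pattern Mirzaie described, enrich the data, apply rules and statistical models, then post-process the output for a human audience, can be sketched in plain Python. Everything below, including the static country rule and the three-standard-deviation threshold, is an illustrative assumption rather than Netflix’s actual implementation.

    from statistics import mean, stdev

    def enrich(event, user_profiles):
        """Append attributes the detectors need, such as the user's recent request counts."""
        event = dict(event)
        event["baseline"] = user_profiles.get(event["user"], [])
        return event

    def detect(event):
        """A simple static rule plus a statistical model (flag counts more than
        three standard deviations above the user's baseline)."""
        anomalies = []
        if event["country"] not in ("US", "CA"):        # illustrative static rule
            anomalies.append("unusual-country")
        base = event["baseline"]
        if len(base) >= 5 and event["requests"] > mean(base) + 3 * stdev(base):
            anomalies.append("request-spike")
        return anomalies

    def post_process(event, anomalies):
        """Turn raw detector output into a finding an application owner can relate to."""
        return {"user": event["user"], "findings": anomalies} if anomalies else None

    profiles = {"svc-batch": [100, 110, 95, 105, 102, 98]}
    event = enrich({"user": "svc-batch", "country": "US", "requests": 900}, profiles)
    print(post_process(event, detect(event)))   # {'user': 'svc-batch', 'findings': ['request-spike']}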

The stakes are high given the $8 billion that Netflix is expected to spend on content this year.

But the stakes might be even higher for Facebook. The social network, which has been in the international spotlight over misuse of its platform by outside companies and groups, says it uses a combination of automated and manual systems to identify fraudulent and suspicious activity.

Facebook, which held a similar event of its own in April, was among the companies that presented during the gathering at Microsoft this week. Facebook recently announced that it used new machine learning practices to detect more than 500,000 accounts tied to financial scams.

Mark Russinovich, Microsoft Azure CTO, in his conference room on the company’s Redmond campus this week. (GeekWire Photo / Todd Bishop)

During his keynote, Microsoft’s Russinovich talked in detail about Windows PowerShell, the command-line program that is a popular tool for attackers in part because it’s built into the system. Microsoft’s Windows Defender Advanced Threat Protection is designed to detect suspicious command lines, and Microsoft was previously using a traditional model that was trained to recognize potentially malicious sequences of characters.

“That only got us so far,” Russinovich said in an interview.

After brainstorming ways to solve the problem, the company’s security defense researchers figured out how to apply deep neural networks, more commonly used in vision-based object detection, for use in PowerShell malicious script detection, as well. They essentially came up with a way to encode command lines to make them look like images to the machine learning model, Russinovich explained. The result surpassed the traditional technique “by a significant amount,” he said.
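
Microsoft did not detail the exact encoding, but the general idea of presenting a command line to an image-style network can be sketched as mapping characters to byte values and reshaping them into a fixed-size grid. The grid dimensions and padding scheme below are assumptions for illustration only.

    def command_line_to_grid(cmd, rows=16, cols=64):
        """Encode a command line as a fixed-size 2D grid of byte values (0-255),
        truncating or zero-padding so every sample has the same image-like shape."""
        codes = [min(ord(c), 255) for c in cmd][: rows * cols]
        codes += [0] * (rows * cols - len(codes))          # pad to fill the grid
        return [codes[r * cols:(r + 1) * cols] for r in range(rows)]

    grid = command_line_to_grid("powershell -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQAKQ==")
    print(len(grid), "x", len(grid[0]))   # 16 x 64, ready to feed to a vision-style model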

At the closing panel discussion, David Seidman, Google security engineering manager, summed up the stated philosophy of the event. “We are not trying to compete on the basis of our corporate security,” Seidman said. “Google is not trying to get ahead of Microsoft in the cloud because Microsoft got compromised. That’s the last thing we want to see.”

“We are fighting common enemies,” Seidman added. “The same attackers are coming after all of us, and an incident at one company is going to affect that customer’s trust in all the cloud companies they do business with. So we have very much aligned interests here.”

EU institutes Kaspersky ban, calls software ‘malicious’

After the European Parliament voted to institute a Kaspersky ban on the use of its products in the European Union, Kaspersky Lab temporarily suspended its involvement with Europol and the No More Ransom project.

In a plenary session, the European Parliament voted on a cyberdefense strategy report written by Urmas Paet, the Estonian member of the European Parliament on the Committee on Foreign Affairs. The resolution included an amendment from a Polish MEP that “calls on the EU to perform a comprehensive review of software, IT and communications equipment and infrastructure used in the institutions in order to exclude potentially dangerous [programs] and devices, and to ban the ones that have been confirmed as malicious, such as Kaspersky Lab.”

The resolution containing the Kaspersky ban was approved by 476 votes to 151. Following the vote, Kaspersky announced it was freezing its cooperation with Europol and the No More Ransom project.

Kaspersky Lab was one of the first antivirus companies to collaborate with Europol law enforcement officials. The company is also one of the founding members of the No More Ransom project, which provides ransomware victims with free decryptors. The European Parliamentary Research Service had recently praised the work of the No More Ransom project.

“We have protected the EU for 20 years working with law enforcement leading to multiple arrests of cybercriminals,” Kaspersky Lab CEO Eugene Kaspersky tweeted after the vote, adding that the company is “forced to freeze” its cooperation with Europol and the No More Ransom project.

Other governments — including the United States, the United Kingdom, the Netherlands and Lithuania — have already taken steps to implement a Kaspersky ban on sensitive systems because of suspicions that the Moscow-based company does not work entirely independently from the Russian government and is, therefore, a security risk.

Kaspersky has denied all of these accusations and took to Twitter again this week to reiterate that the claims made by the European Parliament are unfounded.

“The risks of using our software are purely hypothetical. Just as hypothetical as with any other cybersecurity software of any country,” he tweeted, adding that the risk of cyberattacks is real and “extremely high.” He went as far as saying the European Parliament’s decision “plays for cybercrime.”

Kaspersky Lab has been trying to prove its innocence with measures such as its Global Transparency Initiative, which moves some of the company’s processes out of Russia and to Switzerland.

In other news:

  • Yahoo has been fined 250,000 pounds — approximately $331,000 — for its 2014 data breach. The United Kingdom Information Commissioner’s Office (ICO) investigated the more than 515,000 Yahoo user accounts affected by the breach in the U.K. and found Yahoo had violated the Data Protection Act 1998. The Yahoo U.K. Services branch of the company — which was purchased by Verizon and merged with AOL to form Oath — is responsible for the breached U.K. accounts. Overall, the massive data breach affected around 500 million users worldwide. The ICO found that Yahoo U.K. Services failed to take the appropriate measures to ensure its parent company, Yahoo Inc., complied with the correct data protection standards and failed to ensure the appropriate monitoring services were in use to protect users.
  • Equifax appointed Bryson Koehler as its new CTO this week. Koehler was previously the CTO of IBM Watson and Cloud Platform, as well as CTO and CIO of The Weather Co. “The world of AI is unlocking massive potential in how data can be used, and cloud-based AI technology is a game changer for developing secure and reliable data-driven products,” Koehler said in a statement announcing his new role. “I see tremendous opportunity for Equifax to become a leading data-driven technology company, and I’m excited to join its highly-talented team to bring new energy that accelerates Equifax’s transformation into a leader of insight forecasting.” Koehler’s appointment follows the massive data breach Equifax reported in September 2017, which affected at least 145 million consumers.
  • Tenable Network Security has filed for an initial public offering (IPO), according to a report from The Information. Tenable filed for the IPO on June 11, which makes it the third cybersecurity company to go public so far this year, following Carbon Black and Zscaler. Both companies have reported growth since going public, with shares up 12% and 18%, respectively. Tenable plans to go public in late July, according to the report. The company makes cybersecurity software and is run by CEO Amit Yoran, who was previously the president of RSA. Reuters reported in March 2018 that the company hired the investment bank Morgan Stanley to prepare for the IPO. The report also said the IPO could put the value of Tenable between $1.5 and $2 billion.

Quantum computing applications creep forward

They are still largely products of the laboratory, but quantum computing applications may be reaching the point at which business leaders should begin to take notice.

Machine learning and AI are widely seen as fodder for future quantum approaches, but chemical simulation, cryptography and material science may be lining up first for the quantum treatment. In quantum computing application developments this week:

  • Volkswagen at the CEBIT technology show in Hannover, Germany, disclosed some successes in its work with Google on quantum computers for battery research.
  • Startup Strangeworks rolled out of stealth mode to describe plans to provide development tools for quantum computing aimed at the aerospace, pharmaceuticals, energy and finance industries.
  • IBM announced ACQUA, for Algorithms and Circuits for Quantum Applications. When used with a previously available IBM quantum information science kit for software developers, ACQUA software is intended to allow domain experts in fields like chemistry, optimization and AI to run existing algorithms from classical computing jobs on IBM quantum computers on the IBM Cloud.

The quantum computing applications have evoked excitement even though they have yet to certifiably surpass the best of conventional computers; some call that inflection point “quantum supremacy.”

The buzz has been triggered because quantum computing, based on murky-to-many atomic-scale quantum mechanics, could spur exponential increases in data processing. Someday, quantum approaches could blast past classical computers, whose bits are limited to the binary states of 0 and 1, by using qubits that can also exist in superpositions of 0 and 1.
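
As a concrete toy example of the difference, a single qubit’s state can be written as two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1; a classical bit is restricted to the two extreme cases.

    import math

    # A qubit state |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
    a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # an equal superposition of 0 and 1

    prob_zero, prob_one = abs(a) ** 2, abs(b) ** 2
    print(f"P(measure 0) = {prob_zero:.2f}, P(measure 1) = {prob_one:.2f}")   # 0.50 and 0.50

    # A classical bit is limited to (a, b) = (1, 0) or (0, 1); superposition is what lets
    # n qubits carry amplitudes over all 2**n basis states at once.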

Closer quantum view

Like other recent news on quantum computing applications, the work tends to be research-oriented. More than 20 years after it was demonstrated at the logical gate level, quantum computing still seems a ways off. But it may be getting closer.

Forrester analyst Brian Hopkins has a view on this. He has been tracking quantum computing for years and admits it hasn’t been close enough that business leaders needed to have it on their radar. That may be changing, though there is still some “fudge factor” in his time estimate, he said.

“We may be in that two- to three-year time frame — or, definitely within five years — where we are going to have some real uses for specialized non-error corrected quantum computers in certain industries,” Hopkins said. “The leaders who are savvy enough to make the right investments could be in a position to reap first mover benefits.”

Elaborating on quantum computing applications, Hopkins said he divides quantum computers into specialized and universal systems.

The specialized variety can solve individual, practical problems. The universal type, like today’s general-purpose computers, is meant to handle all kinds of problems. Error correction in the quantum domain is also somewhat different from error correction in digital circuits.

According to Hopkins, long-running jobs will stress quantum systems’ abilities to maintain stable qubits. Error correction for quantum computing applications, he has written, is meant to yield smaller numbers of fault-tolerant, stable and logical qubits from many physical qubits.
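
As a toy illustration of that overhead, assuming, purely for the sake of example, that a scheme needs on the order of a thousand physical qubits for each fault-tolerant logical qubit:

    PHYSICAL_PER_LOGICAL = 1000   # assumed overhead for illustration only; real ratios vary widely

    def logical_qubits(physical_qubits, overhead=PHYSICAL_PER_LOGICAL):
        """Fault-tolerant logical qubits available from a pool of physical qubits."""
        return physical_qubits // overhead

    print(logical_qubits(50_000))   # 50 logical qubits under the assumed overhead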

Tech developers should learn quantum computing

Tech leaders will have to begin learning some basics of quantum computing in order to follow the future progress of the different types of quantum computers, Hopkins maintained. That is especially true in science-intensive industries.

In those areas, he said, leaders need “to follow the progress of the different types of quantum computers to understand when they might achieve supremacy in the domains where they have a business problem that could benefit.”

“For example, if you are in chemical manufacturing, and [researchers] hit supremacy in quantum chemistry — if a Google or an IBM can create some quantum algorithms that run on a quantum computer that can solve some theoretical quantum chemistry problems — you should know that,” Hopkins said.

Hopkins recently reported on the status of quantum computing applications in a Forrester blog.

Major tech companies getting into quantum

There is interest in machine learning, neural networks and AI among such quantum players as D-Wave Systems, Google, IBM and Microsoft. But Google, particularly, has AI on its mind when it comes to quantum endeavors, Hopkins said.

“The key to understanding why Google is doing this at all is to look at the name of its lab,” he said. “They call it the Google Quantum AI Lab. The reason for that is they believe the primary application for quantum computing is accelerating artificial intelligence.”

Quantum computing has a long history. With all the data needed to successfully train deep learning systems of the future, some exponential breakthroughs of the quantum kind could prove very helpful — eventually.

As Hopkins and other quantum aficionados might say: Watch this space.