Tag Archives: Startup

Apstra bolsters IBN with customizable analytics

Startup Apstra has added to its intent-based networking software customizable analytics capable of spotting potential problems and reporting them to network managers.

This week, Apstra introduced intent-based analytics as part of an upgrade to the company’s Apstra Operating System (AOS). The latest version, AOS 2.1, also includes other enhancements, such as support for additional network hardware and the ability to use a workload’s MAC or IP address to find it in an IP fabric.

In general, AOS is a network operating system designed to let managers automatically configure and troubleshoot switches. Apstra focuses on hardware transporting Layer 2 and Layer 3 traffic between devices from multiple vendors, including Arista Networks, Cisco, Dell and Juniper Networks. Apstra also supports white-box hardware running the Cumulus Networks OS.

AOS, which can run on a virtualized x86 server, communicates with the hardware through installed drivers or the hardware’s REST API. Data on the state of each device is continuously fed to the AOS data store. Alerts are sent to network operators when the state data conflicts with how a device is configured to operate.
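In concept, that closed loop works like the following sketch (illustrative Python only; this is not Apstra's actual API or data model): collected state is compared against the intended configuration, and any mismatch produces an alert.

```python
# Illustrative sketch of an intent-vs-state check: compare telemetry
# collected from each device against its intended configuration and
# raise an alert for every field that conflicts.

INTENDED = {
    "leaf1": {"bgp_sessions": 2, "interface_speed": "10G"},
    "spine1": {"bgp_sessions": 4, "interface_speed": "40G"},
}

def check_state(device, observed):
    """Return a list of alerts where observed state conflicts with intent."""
    alerts = []
    for key, expected in INTENDED[device].items():
        actual = observed.get(key)
        if actual != expected:
            alerts.append(f"{device}: {key} expected {expected!r}, got {actual!r}")
    return alerts

# A telemetry sample showing one BGP session down on leaf1.
alerts = check_state("leaf1", {"bgp_sessions": 1, "interface_speed": "10G"})
```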

AOS 2.1 takes the software’s capabilities up a notch through tools that operators can use to choose specific data they want the Apstra analytics engine to process.

“This is a logical progression for Apstra with AOS,” said Brad Casemore, an analyst at IDC. “Pervasive, real-time analytics should be an integral element of any intent-based networking system.”

Using Apstra analytics

The first step is for operators to define the type of data AOS will collect. For example, managers could ask for the CPU utilization on all spine switches. Also, they could request queries of all the counters for server-facing interfaces and of the routing tables for links connecting leaf and spine switches.

Mansour Karam, CEO, Apstra

“If you were to add a new link, add a new server, or add a new spine, the data would be included automatically and dynamically,” Apstra CEO Mansour Karam said.

Once the data is defined, operators can choose the conditions under which the software will examine the information. Apstra provides preset scenarios or operators can create their own. “You can build this [data] pipeline in the way that you want, and then put in rules [to extract intelligence],” Karam said.

Useful information that operators can extract from the system includes:

  • traffic imbalances on connections between leaf and spine switches;
  • links reaching traffic capacity;
  • the distribution of north-south and east-west traffic; and
  • the available bandwidth between servers or switches.
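A rule of the first kind, flagging leaf-spine traffic imbalance, could be expressed along these lines. This is a hypothetical sketch; the names and threshold are invented for illustration and are not Apstra's rule syntax.

```python
# Hypothetical operator-defined rule: flag leaf-to-spine links whose
# utilization deviates sharply from the mean across all such links.

from statistics import mean

def find_imbalanced_links(link_utilization, tolerance=0.25):
    """Return links whose utilization deviates from the average by more
    than `tolerance`, expressed as a fraction of the average."""
    avg = mean(link_utilization.values())
    return sorted(
        link for link, util in link_utilization.items()
        if abs(util - avg) > tolerance * avg
    )

# Sample telemetry: one hot link and one cold link stand out.
samples = {"leaf1-spine1": 0.82, "leaf1-spine2": 0.35,
           "leaf2-spine1": 0.60, "leaf2-spine2": 0.58}
imbalanced = find_imbalanced_links(samples)
```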

Enterprises moving slowly with IBN deployments

Other vendors, such as Cisco, Forward Networks and Veriflow, are building out intent-based networking (IBN) systems to drive more extensive automation. Analytics plays a significant role in making automation possible.

“Nearly every enterprise that adopts advanced network analytics solutions is using it to enable network automation,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo. “You can’t really have extensive network automation without analytics. Otherwise, you have no way to verify that what you are automating conforms with your intent.”

Today, most IT staffs use command-line interfaces (CLIs) to manually program switches and scores of other devices that comprise a network’s infrastructure. IBN abstracts configuration requirements from the CLI and lets operators use declarative statements within a graphical user interface to tell the network what they want. The system then makes the necessary changes.
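The difference is easiest to see in a toy sketch. The syntax on both sides is invented and vendor-neutral: the operator supplies one declarative statement, and the system expands it into per-device configuration.

```python
# Illustrative contrast between declarative intent and per-device CLI:
# the operator states the desired outcome once, and the system derives
# configuration for every affected switch.

intent = {"segment": "pos-terminals", "vlan": 120, "isolate_from": ["guest-wifi"]}

def compile_intent(intent, switches):
    """Expand one declarative statement into per-switch config stanzas."""
    return {
        sw: [f"vlan {intent['vlan']} name {intent['segment']}"]
            + [f"deny {src} -> vlan {intent['vlan']}" for src in intent["isolate_from"]]
        for sw in switches
    }

configs = compile_intent(intent, ["leaf1", "leaf2"])
```

Adding a switch to the fabric means adding one name to the list, not retyping the commands by hand on a console.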

The use of IBN is just beginning in the enterprise. Gartner predicts the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

Atomist extends CI/CD to automate the entire DevOps toolchain

Startup Atomist hopes to revolutionize development automation throughout the application lifecycle, before traditional application release automation vendors catch on.

Development automation has been the fleeting goal of a generation of tools, particularly DevOps tools, that promise continuous integration and continuous delivery. The latest is Atomist and its development automation platform, which aims to automate as many of the mundane tasks as possible in the DevOps toolchain.

Atomist ingests information about an organization’s software projects and processes to build a comprehensive understanding of those projects. Then it creates automations for the environment, which use programming tools such as parser generators and microgrammars to parse and contextualize code.

The system also correlates event streams pulled from various stages of development and represents them as code in a graph database known as the Cortex. Because its founders believe the CI pipeline model falls short, Atomist takes an event-based approach that models everything in an organization’s software delivery process as a stream of events. That event-driven model also lets development teams compose development flows based on those events.

In addition, Atomist automatically creates Git repositories, configures systems for issue tracking and continuous integration, and creates chat channels that consolidate notifications on the project and deliver information to the right people.

“Atomist is an interesting and logical progression of DevOps toolchains, in that it can traverse events across a wide variety of platforms but present them in a fashion such that developers don’t need to context switch,” said Stephen O’Grady, principal analyst at RedMonk in Portland, Maine. “Given how many moving parts are involved in DevOps toolchains, the integrations are welcome.”

Mik Kersten, a leading DevOps guru and CEO at Tasktop Technologies, has tried Atomist firsthand and calls it a fundamentally new approach to managing delivery. As delivery pipelines become increasingly complex, the sources of waste move well beyond the code and into the tools spread across the pipeline, Kersten noted.

Atomist represents a new class of DevOps product that goes beyond CI, which is “necessary, but not sufficient,” said Rod Johnson, Atomist CEO and creator of the Spring Framework.

The rise of microservices, with tens or hundreds of services in a single environment, introduces trouble spots as developers collaborate on, deploy and monitor the lifecycle of those services, Johnson said.

This is particularly important for security, where keeping services consistent is paramount. In last year’s Equifax breach, hackers gained access through an unpatched version of Apache Struts. With Atomist, an organization can identify and upgrade old software automatically across potentially hundreds of repositories, Johnson said.

Rod Johnson, CEO, Atomist

Tasktop’s Kersten agreed that Atomist’s approach to developer-centric automation “goes way beyond what we got with CI.” Atomist created a Slack bot that incorporates its automation facilities, driven by a development automation engine reminiscent of model-driven development or aspect-oriented programming, but one that provides generative facilities not only for code but across project resources and other tools, Kersten said. A notification system informs users what the automations are doing.

Most importantly, Atomist is fully extensible, and its entire internal data model can be exposed in GraphQL.
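As a rough illustration of what querying such a model looks like, here is a hedged sketch. The GraphQL fields below are invented for the example; they are not Atomist's actual Cortex schema.

```python
# Illustrative GraphQL request against an event graph: ask for failed
# builds along with the commit and repository that produced them.
# Field names are hypothetical, not a real Cortex schema.

import json

query = """
{
  Build(status: "failed") {
    repo { name }
    commit { sha author { login } }
  }
}
"""

def graphql_payload(query, variables=None):
    """Build the JSON body a GraphQL endpoint expects for a query."""
    return json.dumps({"query": query, "variables": variables or {}})

body = graphql_payload(query)  # ready to POST to the service's GraphQL endpoint
```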

Tasktop has already explored ways to connect Atomist to Tasktop’s Integration Hub and the 58 Agile and DevOps tools it currently supports, Kersten said.

Automation built into development

As DevOps becomes more widely adopted, integrating automation into the entire DevOps toolchain is critical to help streamline the development process so programmers can develop faster, said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.


“The market to integrate automation and development will grow, as both the companies that use DevOps and the number of applications they develop increase,” he said. Atomist’s integration in the code creation and deployment process, through release and update management processes, “enables automation not just in the development process but also in day two and beyond application management,” he said.

Atomist joins other approaches such as GitOps and Bitbucket Pipelines that target the developer who chooses the tools used across the complete lifecycle, said Robert Stroud, an analyst at Forrester Research in Cambridge, Mass.

“Selection of tooling such as Atomist will drive developer productivity allowing them to focus on code, not pipeline development — this is good for DevOps adoption and acceleration,” he said. “The challenge for these tools is although new code fits well, deployment solutions are selected within enterprises by Ops teams, and also need to support on-premises deployment environments.”

For that reason, look for traditional application release automation vendors, such as IBM, XebiaLabs and CA Technologies, to deliver features similar to Atomist’s capabilities in 2018, Stroud said.

Startup Liqid looks to make a splash in composable storage

Hardware startup Liqid is set to unveil a PCIe-based fabric switch that fluidly configures bare-metal servers from pools of physical compute, flash storage, graphical processing units and network devices.

Backed by $19.5 million in venture funding, the Lafayette, Colo., vendor said Liqid Composable Infrastructure (Liqid CI) is scheduled for general availability by March. Liqid CI integrates the Liqid Grid PCIe 3.0 switch and Liqid Command Center orchestration software on standard servers.

Liqid — pronounced “liquid” — has partnerships with flash memory maker Kingston Technology and Phison Electronics Corp., a Taiwanese maker of NAND flash memory controllers. Both vendors are seed investors.

Liqid CI is designed to scale the provisioning of disaggregated computing devices on bare-metal using Peripheral Component Interconnect Express (PCIe).

The Liqid Grid fabric deploys compute, GPUs, network cards and Kingston SSDs on a shared PCIe bus. The programmable storage architecture allows data centers to dynamically compose a computer system on the fly from disaggregated devices. Liqid Command Center configures the individual components on demand as an application needs them.

The idea is to allow an application to consume only the resources it needs. Once the tasks are completed, the device is released back to the global resource pool for other jobs.
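The compose-and-release cycle can be sketched in a few lines. This is an illustrative model of the concept, not Liqid's actual API.

```python
# Minimal model of composable infrastructure: devices live in a shared
# pool, are bound to a bare-metal node on demand, and return to the
# pool when the node releases them.

class ResourcePool:
    def __init__(self, devices):
        self.free = set(devices)
        self.bound = {}          # node name -> set of devices bound to it

    def compose(self, node, wanted):
        """Bind `wanted` devices to `node` if all of them are free."""
        if not wanted <= self.free:
            raise RuntimeError("requested devices not available")
        self.free -= wanted
        self.bound[node] = wanted

    def release(self, node):
        """Return a node's devices to the global pool for other jobs."""
        self.free |= self.bound.pop(node, set())

pool = ResourcePool({"gpu0", "gpu1", "ssd0", "ssd1", "nic0"})
pool.compose("node-a", {"gpu0", "ssd0"})   # build a GPU-plus-flash node
pool.release("node-a")                     # job done; devices go back
```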

“If you need more storage, you don’t send somebody with a cart to plug in more storage. You reprogram the fabric to suck in more storage from the interconnected pools,” said Sumit Puri, a Liqid co-founder and vice president of marketing.

Liqid and Orange Silicon Valley — the global telecom provider’s U.S. arm — last November displayed a prototype device that can provide on-demand GPU performance for high-performance computing.

Camberley Bates, a managing director at Boulder, Colo., company Evaluator Group, said Liqid CI provides the ability to flexibly add or subtract computing devices to boost performance or control costs.

“You’re using straightforward x86 CPUs and SSDs. Pull all the pieces together and off you go,” Bates said.

Hewlett Packard Enterprise is considered the leader in the emerging composable infrastructure market with its Ethernet-based Synergy virtualization hardware platform. Composable infrastructure signals where converged and hyper-converged markets are headed, Bates said. 

“There is too much hardening and not enough flexibility in a converged environment,” Bates said. “There are only a few vendors doing composable systems now, but over the long term, we think it has legs.”

The Liqid Grid PCIe 3.0 managed switch scales to connect thousands of devices. Physical hardware interconnections can be copper or photonics over MiniHD SAS cabling, with 24 ports and up to 96 PCIe lanes. Each port is rated for full-duplex bandwidth of 8 gigabytes per second.

Puri said Liqid is seeking OEM partners to design Liqid CI rack-scale systems with qualified servers. The earliest to sign on is Inspur, which markets a Liqid CI-based platform to offer graphical processing units (GPUs) as a service to data centers running large AI application farms.

Customers also can purchase a developer’s kit directly from Liqid that comes as a 6U rack with two nodes, four 6.4 TB SSDs, two network interface cards and two GPU cards for about $30,000.

StorOne attacks bottlenecks with new TRU storage software

Startup StorOne this week officially launched its TRU multiprotocol software, which its founder claims will improve the efficiency of storage systems.

The Israel-based newcomer spent six years developing Total Resource Utilization (TRU) software with the goal of eliminating bottlenecks caused by software that cannot keep up with faster storage media and network connectivity.

StorOne developers collapsed the storage stack into a single layer that is designed to support block (Fibre Channel and iSCSI), file (NFS, SMB and CIFS) and object (Amazon Simple Storage Service) protocols on the same drives. The company claims to support enterprise storage features such as unlimited snapshots per volume, with no adverse impact to performance.

TRU software is designed to run on commodity hardware and support hard disk drives; faster solid-state drives (SSDs); and higher performance, latency-lowering NVMe-based PCI Express SSDs on the same server. The software installs either as a virtual machine or on a physical server.

StorOne CEO and founder Gal Naor said the TRU software-defined storage fits use cases ranging from high-performance databases to low-performance workloads, such as backup and data archiving.

‘Dramatically less resources’

“We need dramatically less resources to achieve better results. Results are the key here,” said Naor, whose experience in storage efficiency goes back to his founding of real-time compression specialist Storwize, which IBM acquired in 2010.

StorOne CTO Raz Gordon said storage software has failed to keep up with the speed of today’s drives and storage networks.

“We understood that the software is the real bottleneck today of storage systems. It’s not the drives. It’s not the connectivity,” said Gordon, who was the leading force behind the Galileo networking technology that Marvell bought in 2001.

The StorOne leaders are sparse on details so far about the product’s architecture and enterprise capabilities, beyond unlimited storage snapshots.

Marc Staimer, senior analyst at Dragon Slayer Consulting, said StorOne’s competition would include any software-defined storage products that support block and file protocols, hyper-converged systems, and traditional unified storage systems.

“It’s a crowded field, but they’re the only ones attacking the efficiency issue today,” Staimer said.

“Because of TRU’s storage efficiency, it gets more performance out of fewer resources. Less hardware equals lower costs for the storage system, supporting infrastructure, personnel, management, power and cooling, etc.,” Staimer added. “With unlimited budget, I can get unlimited performance. But nobody has unlimited budgets today.”

StorOne user interface
TRU user interface shows updated performance metrics for IOPS, latency, I/O size and throughput.

Collapsed storage stack

The StorOne executives said they rebuilt the storage software with new algorithms to address bottlenecks. They claim StorOne’s collapsed storage stack enables the fully rated IOPS and throughput of the latest high-performance SSDs at wire speed.

“The bottom line is the efficiency of the system that results in great savings to our customers,” Gordon said. “You end up with much less hardware and much greater performance.”

StorOne claimed a single TRU virtual appliance with four SSDs could deliver the performance of a midrange storage system, and an appliance with four NVMe-based PCIe SSDs could achieve the performance and low latency of a high-end storage system. The StorOne system can scale up to 18 GBps of throughput and 4 million IOPS with servers equipped with NVMe-based SSDs, according to Naor. He said the maximum capacity for the TRU system is 15 PB, but he provided no details on the server or drive hardware.

“It’s the same software that can be high-performance and high-capacity,” Naor said. “You can install it as an all-flash array. You can install it as a hybrid. And you’re getting unlimited snapshots.”

Naor said customers could choose the level of disk redundancy to protect data on a volume basis. Users can mix and match different types of drives, and there are no RAID restrictions, he said.

StorOne pricing

Pricing for the StorOne TRU software is based on physical storage consumption through a subscription license. A performance-focused installation of 150 TB would cost 1 cent per gigabyte, whereas a capacity-oriented deployment of 1 PB would be $0.006 per gigabyte, according to the company. StorOne said pricing could drop to $0.002 per gigabyte with multi-petabyte installations. The TRU software license includes support for all storage protocols and features.
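As a worked example of the consumption-based model (illustrative arithmetic only; decimal units are assumed, so 1 TB = 1,000 GB):

```python
# Consumption-based pricing: monthly cost is capacity under management
# times the per-gigabyte subscription rate for that tier.

def monthly_cost(capacity_tb, rate_per_gb):
    """Return the monthly subscription cost in dollars."""
    return capacity_tb * 1_000 * rate_per_gb

# A 150 TB performance-focused installation at $0.01/GB per month:
perf_cost = monthly_cost(150, 0.01)   # roughly $1,500 per month
```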

StorOne has an Early Adopters Program in which it supplies free on-site hardware of up to 1 PB.

StorOne is based in Tel Aviv and also has offices in Dallas, New York and Singapore. Investors include Seagate and venture capital firms Giza and Vaizra. StorOne’s board of directors includes current Microsoft chairman and former Symantec and Virtual Instruments CEO John Thompson, as well as Ed Zander, former Motorola CEO and Sun Microsystems president.

VeloCloud SD-WAN user Brooks Brothers assesses VMware buy

The news of VMware’s intent to acquire software-defined WAN startup VeloCloud is a sign of the times, as the SD-WAN market continues to consolidate. While the acquisition might not have immediate repercussions for existing SD-WAN customers, it’s clear the shifting SD-WAN vendor landscape is forcing enterprises to consider how a potential sale of their current provider could affect their operations.

Case in point: Brooks Brothers. The New York-based retailer has some 300 locations using VeloCloud SD-WAN. Manny Stergakis, director of technical services, said while the clothier isn’t worried about the acquisition, he doesn’t want VMware to change VeloCloud SD-WAN.

“We would hope, obviously, [VMware] wouldn’t change the [VeloCloud] product and they would keep the support and the R&D they do for the product,” he said. He added that Brooks Brothers is currently a VMware shop, using VMware ESX hypervisor software and Dell products, as well.

“So, for us, I think [the acquisition] is a positive, because these are the vendors we use anyway,” he said.

Jim Duffy, an analyst at 451 Research, said he believes most companies using both VeloCloud and VMware software will experience few bumps along the way.

“If [customers] are already VMware shops, especially VMware NSX, they have to be feeling pretty good,” Duffy said in an email to SearchSDN. “This could potentially provide a consistent and uniform policy extension from the data center to the cloud, through the WAN.”

VeloCloud — like most vendors in similar situations — will most likely optimize its SD-WAN features for VMware environments, Duffy said. VeloCloud SD-WAN customers using Cisco products, however, could see their features capped out at the current level of capability. Cisco completed its acquisition of SD-WAN vendor Viptela this summer.

Duffy said both Cisco and VMware will try to use their SD-WAN acquisitions to persuade enterprises to unify their underlying network infrastructures on a single platform.

“It’s an opportunity for VMware to migrate those shops to NSX, and [it’s] also an opportunity for Cisco to migrate ACI [Application Centric Infrastructure] shops to Viptela,” he said. “Expect to see some incentives coming from these and other vendors in order to entice mixed-vendor environments to move one way or another.”

The right fit for Brooks Brothers


For Brooks Brothers, its adoption of VeloCloud SD-WAN was fueled by MPLS connectivity limitations it faced as it tried to bring up new business applications.

“As new applications were coming out on the business side for our stores — as well as our corporate locations — they had a lot more need for bandwidth, and the legacy MPLS wasn’t cutting it,” Stergakis said. Cost was another consideration.

These factors led Stergakis to look into broadband connectivity for the company’s locations. But with more than 500 company sites worldwide, broadband presented potential support issues.

“I didn’t want to have my network guys support 300 to 400 tunnels and firewalls all over the U.S. and the globe,” Stergakis said.

Brooks Brothers turned its attention to SD-WAN, which would allow the retailer to use broadband as the primary link, with cellular connectivity as backup. After looking at other SD-WAN startups, which Stergakis said he deemed immature, the retailer ran a successful pilot with VeloCloud SD-WAN.

“We were looking for a product that was easy to use,” he said. “Take it out of the box, set it down, configure it and send it out. That’s what we were able to achieve with VeloCloud.”

The design and prep work for the SD-WAN implementation took a couple of months, according to Stergakis.

“Obviously, when you introduce something new to the environment, we weren’t going to be converting everything over in one day,” he said. “We really had to scope out the traffic — the old traffic, the new traffic, what was going to happen, where it was going — to accommodate the routes everywhere.”

Brooks Brothers designed its SD-WAN with broadband or DSL as primary — depending on availability and location — and Long Term Evolution from Cradlepoint as the backup. MPLS is still used, but only minimally, Stergakis said, for larger corporate locations.

Brooks Brothers has already deployed VeloCloud SD-WAN in its Australian and Hong Kong locations, as well as the majority of its North American sites. Once the company completes the remaining six North American implementations, Stergakis said locations in Japan and Europe are next. Then, the company will look at the remaining sites using MPLS and determine how to move forward from there.

“Like us, I’m sure many people out there look at what’s on the market and what they have today,” he said. “You have to look at the appliance and the configuration, the ease of use, the effort the companies are putting into it, and then you make the right choice.”

Komprise adds data lifecycle management, deeper analytics

Startup Komprise updated its data management software to enable customers to move data multiple times under a single policy, perform finer-grained analytics on metadata and confine access to obsolete or unwanted data.

The Komprise Intelligent Data Management product works across NFS-, SMB- and CIFS-based file storage, as well as cloud and on-premises object storage. Komprise Observer software runs locally in one or more server virtual machines, gathering information on data and storage assets and sending summary information to out-of-band Director controller software.

Komprise Director, which runs on premises or as a cloud service, aggregates the data and displays metrics, such as data usage, capacity, access frequency, growth rate and potential savings if the user tiers to less expensive storage. Customers can test what-if scenarios through the Director dashboard and set policies to manage the data placement the Observer software executes.

Komprise’s users have always been able to set a policy to move data from one storage system to another. But if they wanted to move data two or more times, they had to set a separate policy for each migration.

With the new Komprise Intelligent Data Management 2.6 release, customers can set a policy to manage data sets throughout their lifecycle. For instance, a user could move data from file storage to on-premises object storage, shift the data to Amazon Simple Storage Service (S3) after six months, and to Amazon’s colder and less expensive Glacier storage service after a year.
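Such a single lifecycle policy might be modeled like this. The sketch is illustrative only; it is not Komprise's actual policy format.

```python
# One policy governs every move the data makes over its lifetime:
# each rule pairs a minimum age with a destination tier.

POLICY = [
    (0,   "on-prem object store"),
    (180, "Amazon S3"),
    (365, "Amazon Glacier"),
]

def tier_for_age(age_days, policy=POLICY):
    """Return the destination tier for data of a given age in days."""
    tier = policy[0][1]
    for min_age, dest in policy:
        if age_days >= min_age:
            tier = dest
    return tier

# Six-month-old data lands in S3; year-old data moves on to Glacier.
placement = [tier_for_age(d) for d in (30, 200, 400)]
```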

Komprise can migrate data to any NFS, SMB and CIFS file storage, and to object storage that supports Amazon’s S3 API. Komprise cloud storage options are Amazon S3 and Glacier, and Google Nearline and Coldline storage. Komprise expects to support Microsoft Azure cloud storage later this quarter.

‘Compelling’ Amazon Glacier use case

“The capability to archive to S3 and then after six more months of inactivity move to something really cold, like Glacier, is a compelling use case and hurts the tape guys a little, because Glacier is getting so competitive to tape now price-wise,” said George Crump, founder and president of Storage Switzerland LLC. “Glacier has a different set of APIs, so most companies are slow to support it.”

Also in the 2.6 version, Komprise is making available deeper analytics capabilities as a beta release. Customers can write custom queries and apply filters to the metadata to gain granular insight. Komprise enabled the new deeper analytics through its distributed architecture without a central database or in-memory state, according to Krishna Subramanian, the Campbell, Calif., company’s COO.

Subramanian said customers could use the deeper analytics to find “zombie data” created by former employees, or to locate information specific to a company project. Komprise expects to finalize the deeper analytics capabilities later this year or early next year, she said.

Komprise analytics software interface
Komprise analytics helps find ‘zombie data’ created by former employees and locate information specific to a company project.

PacBio anxious to use deeper analytics

Customer Pacific Biosciences (PacBio) plans to use the new analytics options to help employees locate information about laboratory experiments. The Menlo Park, Calif., genomics sequencing manufacturer has a controlled laboratory information management system (LIMS). But Jay Smestad, senior director of IT at PacBio, said the LIMS organizes information based on sample or experiment numbers, rather than descriptive metadata tags.

Smestad said he also expects to take advantage of Komprise’s new data lifecycle management capabilities to move data between the company’s storage tiers of fast solid-state drives (SSDs), cheaper hard disk drives, slower backup tapes and Amazon S3 for disaster recovery.

“It’s exactly what we need to manage our data. We don’t want to have to touch the data multiple times. Every time we touch the data, it costs us money,” Smestad said. “We want to set a policy for a [data] share, and it should do the right thing automatically for years to come.”

Smestad said he might set a policy to keep hot data on SSDs and move files that aren’t accessed for six months to PacBio’s primary disk tier, and later to cheaper disk and eventually tape. PacBio currently stores about 7 PB of data and faces data growth of about 70% per year, Smestad said.

Confinement feature targets GDPR

Komprise had General Data Protection Regulation (GDPR) compliance in mind with the addition of new capabilities to confine unwanted or obsolete data. Subramanian said Komprise added the feature based on direct customer feedback, as companies face a May 2018 deadline to comply with the European Union’s regulation that gives customers the “right to be forgotten.” GDPR applies to any organization that does business in EU countries.

The Komprise software can move data outside the user and application namespace to a location that is inaccessible, Subramanian said. She said customers could test to ensure there are no dangling pointers to the data before its final deletion.
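The reference check before final deletion can be sketched as follows; the function and names are invented for illustration and are not Komprise's API.

```python
# Before permanently deleting confined data, verify that nothing in the
# live namespace (symlinks, application references) still points at it.

def dangling_references(confined_paths, live_links):
    """Return any live links that still point at confined data."""
    confined = set(confined_paths)
    return sorted(link for link, target in live_links.items()
                  if target in confined)

links = {"/projects/a/report": "/archive/forgotten/report",
         "/projects/b/data":   "/nas/active/data"}
stale = dangling_references(["/archive/forgotten/report"], links)
# `stale` is non-empty, so deletion should wait until the link is fixed.
```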

“The ability to test the impact of data removal is really smart” to make sure applications don’t break, Crump said.

Komprise’s data management competition includes Caringo, Data Dynamics, ioFabric, Primary Data and StrongBox Data, according to Crump. He said they all take different technology approaches, so customers would need to determine what they want to accomplish to figure out which one makes the most sense for them.

Komprise offers monthly subscription and perpetual licensing options for its Intelligent Data Management software. The subscription list price is $7 per terabyte, per month to manage, analyze, move and replicate data, and the perpetual software license is $150 per terabyte.

A startup touts easy-to-use encryption as key to IT security

Can a startup out of Boston convince the workforce to use encryption to safeguard information? 

PreVeil, which as of early fall 2017 had approximately 1,000 users and upward of 40 companies trying out its beta version, believes it can — by recognizing a stubborn fact about workers: Most of us can’t be bothered about securing the reams of data we transmit digitally every day. 

As Randy Battat, PreVeil’s co-founder, president and CEO, pointed out, even in the face of massive data breaches, employees continue to flout basic security best practices, failing to safeguard passwords or change them frequently.

“One goal of PreVeil is to make encryption and data protection so easy that people use encryption for everyday things, as opposed to very specialized business applications,” Battat said.

Randy Battat, co-founder, president and CEO, PreVeil

It’s not only employee security practices that are not up to snuff. Software and hardware continue to have vulnerabilities that attackers continue to exploit, he added. And most IT organizations persist in deploying traditional defenses, such as firewalls and access controls, to combat the growing sophistication of bad actors. Even those companies using encryption usually aren’t going far enough, Battat argued, because they use encryption only part of the time; for example, they encrypt sensitive data in transit, but not at rest.

Battat said PreVeil began with the assumption that any and all servers can be hacked, and IT security software needs to be easy to use. The result, he said, is a new application for end-to-end encrypted email, file sharing and storage that can withstand the inevitable attack, yet is easy to apply universally to sensitive data.
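The end-to-end idea is that keys live only at the endpoints, so a server sees nothing but ciphertext. The sketch below illustrates that property with a deliberately toy XOR cipher, used here purely to keep the example dependency-free; real systems, PreVeil included, rely on vetted primitives such as AES, never anything like this.

```python
# Conceptual end-to-end encryption: the sender encrypts locally, the
# server stores or relays only ciphertext, and the recipient decrypts
# locally. The XOR "cipher" is a toy stand-in for a real algorithm.

import secrets

def xor_bytes(data, key):
    return bytes(b ^ k for b, k in zip(data, key))

key = secrets.token_bytes(64)              # held only by the endpoints
message = b"Q3 acquisition plan attached"
ciphertext = xor_bytes(message, key)       # all the server ever sees
plaintext = xor_bytes(ciphertext, key)     # recipient recovers the message
```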

Use of encryption creeps up

How effortless must encryption become to break down what, thus far, has been businesses’ resistance to using it?


Certainly, enterprise attention to cybersecurity and, consequently, the use of security tools is increasing. Technology research firm Gartner has predicted that worldwide spending on information security products and services will reach $86.4 billion in 2017, a 7% increase over 2016 spending, and will hit $93 billion in 2018.

And part of that upward trend is the use of encryption technology.

Open source community Mozilla earlier this year reported that the average volume of encrypted web traffic on its Firefox browser moved over the 50% mark, surpassing the average unencrypted volume.

Meanwhile, the “2017 Global Encryption Trends Study” released in April found that 41% of the respondents said their company “has an encryption strategy that is applied consistently across the enterprise,” up from 37% two years ago. Only 14% of respondents said their organization does not have an encryption strategy. The study, sponsored by Thales e-Security and independently conducted by Ponemon Institute, polled 4,802 individuals spanning 11 countries.

So does the prevalence — if inconsistent application — of encryption strategies signal that widespread adoption of encryption technology is just around the bend?

Use of encryption, daunting for many

No analysts are covering PreVeil yet, so cybersecurity and encryption experts said they were unable to speak specifically about its technology: whether its functions represent improvements over existing encryption products and whether PreVeil can successfully compete in an already robust market.

Garrett Bekker, principal analyst on the information security team at 451 Research, said most encryption vendors promise to make encryption easy, and they generally do have features that offer improvements over earlier generations of this technology.

Garrett Bekker, principal analyst, 451 Research

“There are companies out there who have made claims that they’ve made it easier to use encryption, and they’re valid. But it can still be a pain in the neck to use,” he explained, saying that asking users to take even just one extra step can be too much. “It may seem trivial, but [many users see it as] inconvenient any time you have to ask someone to click on this or select this drop-down.”

Bekker said other barriers remain to more widespread adoption.

“Generally, there are some forms of costs to doing encryptions — either hard costs or soft costs, such as inconveniencing users, disrupting workflows or adding latency. And you can actually interfere with the functionality of applications,” he said. Encryption also can make searching stored data and archived data problematic.

“It’s not to say those are problems that can’t be solved, but it creates some challenges,” Bekker said.

Moreover, he said encryption vendors have yet to help organizations get over one of their most vexing challenges: how to begin.

“Companies might have petabytes of data and tons of databases. They have data they don’t even know about and unstructured data like Word files scattered all over the place. They don’t know where to start,” he said.

Some organizations start by running discovery scans to identify sensitive information that should be encrypted, Bekker explained, but even then most companies still view establishing an encryption program as a daunting task.

Ron Culler, CTO of Secure Designs Inc., a managed internet security firm in Greensboro, N.C., said he sees many companies that are reluctant to use encryption broadly despite the technology's wide availability. They'll use it for specific types of data or in certain areas of the business, but cost and complexity often keep companies from using it more extensively.

Culler said companies are also hesitant because encryption can be complicated to implement and cumbersome for the business to use. Many companies also don't have the skill sets on staff to implement and manage it, even though today's technology isn't as resource-intensive as it once was.

He also noted that encryption can let malicious code slip through undetected until the payload is decrypted. "If you don't have visibility into what's being sent, when it executes, it's possible you could execute something malicious," he explained, saying it's a scenario that can deter more widespread use of the technology.

Encryption also generally won't stop rogue employees who deliberately leak data or careless employees who circumvent policies and thereby expose sensitive information, he said.

Considering all this, Culler said businesses are right to see encryption as “a solid piece of security policy,” but one that needs to be considered as part of an enterprise-wide program that addresses where it’s really needed based on cost, complexity and risk.

Battat acknowledged that PreVeil’s technology is not a panacea. It will not prevent someone from accessing information on a lost or stolen device that’s not protected by passwords, access controls and the like. And it doesn’t prevent users from forgoing the use of its encryption technology. Still, the PreVeil team is convinced there is huge upside in encryption technology that’s easy to use.

“Of all the things that go on in business, very, very little is encrypted,” Battat said. “Encryption ought to work with the way you work today, and so maybe — if it was really easy — we could go instead to the vast majority of what happens in business being encrypted.” The company plans to release its commercial version during the fourth quarter.

Veriflow premieres intent-based network verification tool

Networking startup Veriflow has premiered four new features to improve its intent-based networking capabilities. Drew Conry-Murray, writing in Packet Pushers, discussed the new features, which include a tool called Automated Intent Inference that flags conflicting rules. Veriflow CloudPredict, currently operational only on Amazon Web Services, uses APIs to pull information about customer networks from Amazon Virtual Private Cloud to visualize traffic, while Preflight and Dynamic Diff let users test network changes against models and compare snapshots of network models. The new intent-based capabilities stem from mathematical formal verification and data that Veriflow harvests from access control lists, forwarding tables, routers, switches, load balancers and firewalls.

Conry-Murray sees Veriflow taking a different approach to intent-based networking, allowing a user to ask the software to confirm whether the network is configured as imagined. “This modeling approach makes sense to me. … It can provide a global view of a complex system and is geared toward generating actionable insight, not just reams of data that it’s up to you to parse. I also like that it’s built for brownfield networks. That is, it’s designed to work with your network as it is, not as you might like it to be,” Conry-Murray said. He added that the new Veriflow offering will need more testing to determine its best uses.
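To make the modeling approach concrete, here is a toy sketch in Python, emphatically not Veriflow's implementation, of the kind of check a data-plane verifier performs: model each device's forwarding behavior and test whether an intent such as "A can reach B" actually holds. The device names and table shape are hypothetical.

```python
# Toy illustration of intent verification against a forwarding model:
# each device's table maps "destination -> next hop", and an intent
# like "leaf1 can reach leaf2" is checked by walking the hops.
def verify_reachability(tables, src, dst, max_hops=16):
    """Return True if traffic from src eventually reaches dst."""
    node = src
    for _ in range(max_hops):          # bound the walk to catch forwarding loops
        if node == dst:
            return True
        next_hop = tables.get(node, {}).get(dst)
        if next_hop is None:           # black hole: no route toward dst
            return False
        node = next_hop
    return False                       # loop or path exceeds hop budget

# Hypothetical three-device fabric: leaf1 -> spine1 -> leaf2
tables = {
    "leaf1": {"leaf2": "spine1"},
    "spine1": {"leaf2": "leaf2"},
}

print(verify_reachability(tables, "leaf1", "leaf2"))  # True
print(verify_reachability(tables, "leaf2", "leaf1"))  # False: no return route
```

A real verifier reasons over every possible packet header rather than single destinations, but the shape of the question is the same: does the harvested device state satisfy the stated intent?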

Read more of Conry-Murray’s thoughts on Veriflow.

Boosting BGP convergence

Ivan Pepelnjak, blogging at ipSpace, examined how to speed up Border Gateway Protocol (BGP) convergence without altering BGP timers. Pepelnjak said reducing the timers might help, especially on Cisco IOS, whose default timers he believes are already too high.

For BGP neighbors on directly connected interfaces, most implementations can detect the loss of a neighboring router almost immediately, as soon as the interface goes down. Pepelnjak recommends Bidirectional Forwarding Detection (BFD), a lightweight protocol, over routing protocol timers for detecting External BGP failures. He added that some platforms support BFD for directly connected Internal BGP neighbors, while others support all IBGP neighbors regardless of their connection type. “Speaking of IBGP, it doesn’t really matter if you lose an IBGP session or two as long as the next hop (where you’re supposed to send the traffic to) is reachable. Platforms that have BGP next hop tracking solve that problem quite nicely as they tie BGP route selection to (usually IGP-derived) next hop reachability in main IP routing table,” Pepelnjak said.
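To see why BFD is preferred over timers, here is a back-of-the-envelope comparison of failure-detection times. The Cisco IOS hold timer below is the documented default; the BFD interval and multiplier are common example values, not universal defaults.

```python
# Rough comparison of worst-case failure detection: BGP hold timer vs. BFD.
bgp_hold_time_s = 180          # Cisco IOS default BGP hold timer (keepalive 60 s)
bfd_tx_interval_ms = 50        # example BFD transmit interval
bfd_multiplier = 3             # session declared down after 3 missed packets

bfd_detection_s = bfd_tx_interval_ms * bfd_multiplier / 1000

print(f"Worst-case BGP timer detection: {bgp_hold_time_s} s")
print(f"BFD detection: {bfd_detection_s} s")                      # 0.15 s
print(f"Speedup: {bgp_hold_time_s / bfd_detection_s:.0f}x")       # 1200x
```

Even an aggressively tuned hold timer of a few seconds is an order of magnitude slower than sub-second BFD detection, which is Pepelnjak's point about leaving the BGP timers alone.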

Dig deeper into Pepelnjak’s thoughts on BGP convergence.

Cybersecurity spending ROI

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., explored the return on investment (ROI) from cybersecurity spending. Data gathered in an ESG survey of 412 IT professionals indicated that 30% of respondents were hampered by total cost of ownership, while 33% said that spending on security operations will increase. According to Oltsik, the data indicates that CIOs are very willing to “throw money” at potential vulnerabilities, but demand that CISOs provide metrics showing that new security measures will be successful.

To improve cybersecurity metrics, Oltsik recommends creating a security operations and analytics integration plan, unifying security and IT teams, implementing process automation and bringing to bear advanced analytics. “As CISOs move forward with these initiatives, they should continuously determine how to measure and report incremental and ongoing advancement they achieve with risk management, security efficacy and operational efficiency,” Oltsik said. “Successful CISOs will be the ones who can demonstrate and communicate real and honest progress anytime they are asked to do so,” he added.

Explore more of Oltsik’s thoughts on cybersecurity ROI.

Writable shared flash volumes highlight E8 Storage upgrade

Startup E8 Storage has sharpened the focus of its nonvolatile memory express all-flash arrays, adding support for parallel file systems in a bid to boost scalability and shared flash storage.

The upgrade allows users to scale capacity beyond a single appliance by allowing host machines to access multiple E8 Storage appliances. The enhanced E8 Storage software supports shared writable volumes, which the vendor claims allows 96 clustered hosts to read and write to the same volume in parallel at line speed. That feature is geared initially to organizations running IBM Spectrum Scale — formerly IBM General Parallel File System — and Oracle Real Application Cluster (RAC) environments, although shared flash has implications for any parallel file system used in technical computing.

The vendor this week also previewed E8-X24 block arrays at the Flash Memory Summit in Santa Clara, Calif. The X-24 is a companion to the flagship E8 Storage D-24 rack-scale flash system that it launched last year. The X-24 will allow customers to mix and match NAND flash and storage-class memory in the same box. E8 Storage said X-24 proofs of concept are underway at cloud providers, financial services and travel industry firms. The X-24 array is expected to be generally available in the fourth quarter.

“The focus of this release is to increase the agility of our system for application acceleration. We’re supporting more parallel file architectures to help customers get the most processing power and move away from serial access to data,” said Julie Herd, director of technical marketing for E8 Storage.

Shared writable volumes connect multiple hosts to back end

The nonvolatile memory express (NVMe) host controller interface is designed to speed data transfer between host systems and flash media. The NVMe protocol transmits the packets across the PCI Express interconnect, bypassing the traditional network hops between networking components.

The E8 Storage shared flash block system uses dual-ported server and rack hardware from OEM AIC Inc. It supports 24 drives of 7.68 TB each, scaling storage to 140 TB of usable flash per rack. Drives connect via a Remote Direct Memory Access over Converged Ethernet (RoCE) high-performance fabric. E8 client software handles dynamic LUNs, RAID 6 schemes and thin provisioning.
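A quick sanity check on those capacity figures; note that attributing the raw-to-usable gap to RAID 6 and spares is our assumption, since E8 doesn't break down the difference here.

```python
# Sanity check: 24 drives at 7.68 TB each vs. the quoted 140 TB usable.
drives = 24
drive_tb = 7.68
raw_tb = drives * drive_tb           # 184.32 TB raw
usable_tb = 140                      # vendor-quoted usable capacity

overhead = 1 - usable_tb / raw_tb
print(f"Raw: {raw_tb:.2f} TB, usable: {usable_tb} TB, overhead: {overhead:.0%}")
# Roughly 24% overhead, plausibly RAID 6 parity plus spare capacity.
```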

Although sharing a volume isn't a new idea, supporting it with block storage is a challenge. It requires vendors to enable software capabilities in the storage layer, particularly a locking mechanism that allows clustered servers to simultaneously read and write results to the same volume without interfering with one another.

In its rack-scale deployment, each server sees E8 Storage servers as local block storage. A parallel file system writes data to those servers at the host level. The E8 agent responds to lock calls to prevent data collisions as multiple hosts access the volume in real time.

“This was one of the early-on requests we had from customers: the ability to have read and write access to shared flash. We’ve had it in test with IBM Spectrum Scale for a couple months. Now, we’re ready to launch,” Herd said.

Eric Burgener, a storage analyst with IT firm IDC, said E8 Storage offers a potential alternative to the Oracle Exadata in-memory product that supports large Oracle RAC deployments, which require underlying high-performance storage. Oracle does not have an end-to-end NVMe implementation for Exadata.

“For a company the size of E8 Storage, selling even 10 systems in a year into Oracle RAC environments would be a pretty big deal. They have a better performance than Oracle Exadata and cost about one-third less. Now is the time for E8 to get into those environments that will be looking to refresh every quarter,” Burgener said.

Other potential use cases for E8 to pursue involve parallel file-system-based technical computing for big data, fraud detection, life sciences, online transaction processing and seismic processing, Burgener said.

Choose between flash and SCM, with dedicated RAID

Herd said E8 Storage is testing the forthcoming X-24 array with Intel’s Optane-based storage-class memory SSDs. The Optane drives provide a persistent memory cache designed to mimic the performance of an in-memory database.

Rather than having an in-memory cluster access servers across a network, E8 said its architecture provides better scalability by eliminating the need to build dedicated storage into the servers. Dedicated network links ensure each tier of storage gets sufficient bandwidth.

One feature lacking is dynamic tiering between shared flash and storage-class memory. Herd said E8 Storage customers will have to determine which database apps require in-memory-like performance.

The upgrade allows hosts to access multiple E8 Storage appliances. Initially, customers could connect 96 host servers to a single appliance. The new configuration allows NAND flash and Intel Optane SSDs to be shared across D-24 and X-24 arrays. Instead of one large RAID configuration, customers can create multiple smaller RAID groups and dedicate each to a specific cluster.

E8 Storage is among a handful of startups selling fast, scalable shared flash storage built on off-the-shelf NVMe drives. Other entrants include Apeiron Data Systems and software-defined Excelero. Two other hopefuls, Pavilion Data Systems and Vexata, have yet to formally unveil their storage gear.

Configure your app to start at log-in

For a long time, desktop PC users have been able to configure Win32 apps to run at startup or user log-in. This has also been possible for Desktop Bridge apps since the Windows 10 Anniversary Update (v10.0.14393.0). We’ve now extended this feature to regular Universal Windows Platform (UWP) apps. It is available in Insider builds from Build 16226 onwards, along with the corresponding SDK. In this post, we’ll look at the code changes you need to make in your manifest and in your App class to handle the startup scenario, and at how your app can work with the user to respect their choices.

Here’s a sample app, called TestStartup – the app offers a button to request enabling the startup behavior, and reports current status. Typically, you’d put this kind of option into a settings page of some kind in your app.

The first thing to note is that you must use the windows.startupTask Extension in your app manifest under the Extensions node, which is a child of the Application node. This is documented here. The same Extension declaration is used for both Desktop Bridge and regular UWP apps – but there are some differences.

  • Desktop Bridge is only available on Desktop, so it uses a Desktop-specific XML namespace. The new UWP implementation is designed for use generally on UWP, so it uses a general UAP namespace (contract version 5) – although to be clear, it is currently still only actually available on Desktop.
  • The Desktop Bridge EntryPoint must be “Windows.FullTrustApplication,” whereas for regular UWP it is the fully-qualified namespace name of your App class.
  • Desktop Bridge apps can set the Enabled attribute to true, which means that the app will start at startup without the user having to manually enable it. Conversely, for regular UWP apps this attribute is ignored, and the feature is implicitly set to “disabled.” Instead, the user must first launch the app, and the app must request to be enabled for startup activation.
  • For Desktop Bridge apps, multiple startupTask Extensions are permitted, and each one can use a different Executable. Conversely, regular UWP apps have only one Executable and one startupTask Extension.
Desktop Bridge app:

xmlns:desktop="http://schemas.microsoft.com/appx/manifest/desktop/windows10"

<desktop:Extension
  Category="windows.startupTask"
  Executable="MyDesktopBridgeApp.exe"
  EntryPoint="Windows.FullTrustApplication">
  <desktop:StartupTask
    TaskId="MyStartupId"
    Enabled="false"
    DisplayName="Lorem Ipsum" />
</desktop:Extension>

UWP app:

xmlns:uap5="http://schemas.microsoft.com/appx/manifest/uap/windows10/5"

<uap5:Extension
  Category="windows.startupTask"
  Executable="TestStartup.exe"
  EntryPoint="TestStartup.App">
  <uap5:StartupTask
    TaskId="MyStartupId"
    Enabled="false"
    DisplayName="Lorem Ipsum" />
</uap5:Extension>

For both Desktop Bridge apps and regular UWP apps, the user is always in control, and can change the Enabled state of your startup app at any time via the Startup tab in Task Manager:

Also, for both app types, the app must be launched at least once before the user can change the Disabled/Enabled state. This can be slightly confusing: if the user hasn’t launched the app and tries to change the state to Enabled in Task Manager, the change will appear to take effect. However, if they close Task Manager and re-open it, the state shows as Disabled again. Task Manager is correctly persisting the user’s choice of the Enabled state, but that choice won’t actually allow the app to be activated at startup until the app has been launched at least once, which is why the state is reported as Disabled.

In your UWP code, you can request to be enabled for startup. To do this, use the StartupTask.GetAsync method to initialize a StartupTask object (documented here) – passing in the TaskId you specified in the manifest – and then call the RequestEnableAsync method. In the test app, we’re doing this in the Click handler for the button. The return value from the request is the new (possibly unchanged) StartupTaskState.


async private void requestButton_Click(object sender, RoutedEventArgs e)
{
    StartupTask startupTask = await StartupTask.GetAsync("MyStartupId");
    switch (startupTask.State)
    {
        case StartupTaskState.Disabled:
            // Task is disabled but can be enabled.
            StartupTaskState newState = await startupTask.RequestEnableAsync();
            Debug.WriteLine("Request to enable startup, result = {0}", newState);
            break;
        case StartupTaskState.DisabledByUser:
            // Task is disabled and user must enable it manually.
            MessageDialog dialog = new MessageDialog(
                "I know you don't want this app to run " +
                "as soon as you sign in, but if you change your mind, " +
                "you can enable this in the Startup tab in Task Manager.",
                "TestStartup");
            await dialog.ShowAsync();
            break;
        case StartupTaskState.DisabledByPolicy:
            Debug.WriteLine(
                "Startup disabled by group policy, or not supported on this device");
            break;
        case StartupTaskState.Enabled:
            Debug.WriteLine("Startup is enabled.");
            break;
    }
}

Because Desktop Bridge apps have a Win32 component, they generally run with more power than regular UWP apps. They can set their StartupTask(s) to Enabled in the manifest and do not need to call the API. For regular UWP apps, the behavior is more constrained, specifically:

  • The default is Disabled, so in the normal case, the user must run the app at least once explicitly – this gives the app the opportunity to request to be enabled.
  • When the app calls RequestEnableAsync, this will show a user-prompt dialog for UWP apps (or if called from a UWP component in a Desktop Bridge app from the Windows 10 Fall Creators Update onwards).
  • StartupTask includes a Disable method. If the state is Enabled, the app can use the API to set it to Disabled. If the app then subsequently requests to enable again, this will also trigger the user prompt.
  • If the user disables (either via the user prompt, or via the Task Manager Startup tab), then the prompt is not shown again, regardless of any requests from the app. The app can of course devise its own user prompts, asking the user to make manual changes in Task Manager – but if the user has explicitly disabled your startup, you should probably respect their decision and stop pestering them. In the sample code above, the app is responding to DisabledByUser by popping its own message dialog – you can obviously do this if you want, but it should be emphasized that there’s a risk you’ll just annoy the user.
  • If the feature is disabled by local admin or group policy, then the user prompt is not shown, and startup cannot be enabled. The existing StartupTaskState enum has been extended with a new value, DisabledByPolicy. When the app sees DisabledByPolicy, it should avoid re-requesting that their task be enabled, because the request will never be approved until the policy changes.
  • Platforms other than Desktop that don’t support startup tasks also report DisabledByPolicy.

Where a request triggers a user-consent prompt (UWP apps only), the message includes the DisplayName you specified in your manifest. This prompt is not shown if the state is DisabledByUser or DisabledByPolicy.

If your app is enabled for startup activation, you should handle this case in your App class by overriding the OnActivated method. Check the IActivatedEventArgs.Kind to see if it is ActivationKind.StartupTask, and if so, cast the IActivatedEventArgs to a StartupTaskActivatedEventArgs. From this, you can retrieve the TaskId, should you need it. In this test app, we’re simply passing on the ActivationKind as a string to MainPage.


protected override void OnActivated(IActivatedEventArgs args)
{
    Frame rootFrame = Window.Current.Content as Frame;
    if (rootFrame == null)
    {
        rootFrame = new Frame();
        Window.Current.Content = rootFrame;
    }

    string payload = string.Empty;
    if (args.Kind == ActivationKind.StartupTask)
    { 
        // startupArgs.TaskId identifies which startup task activated the app.
        var startupArgs = args as StartupTaskActivatedEventArgs;
        payload = ActivationKind.StartupTask.ToString();
    }

    rootFrame.Navigate(typeof(MainPage), payload);
    Window.Current.Activate();
}

Then, the MainPage OnNavigatedTo override tests this incoming string and uses it to report status in the UI.


protected override void OnNavigatedTo(NavigationEventArgs e)
{
    string payload = e.Parameter as string;
    if (!string.IsNullOrEmpty(payload))
    {
        activationText.Text = payload;

        if (payload == "StartupTask")
        {
            requestButton.IsEnabled = false;
            requestResult.Text = "Enabled";
            SolidColorBrush brush = new SolidColorBrush(Colors.Gray);
            requestResult.Foreground = brush;
            requestPrompt.Foreground = brush;
        }
    }
}

Note that when your app starts at startup, it will start minimized in the taskbar. In this test app, when brought to normal window mode, the app reports the ActivationKind and StartupTaskState:

Using the windows.startupTask manifest Extension and the StartupTask.RequestEnableAsync API, your app can be configured to start at user log-in. This can be useful for apps which the user expects to use heavily, and the user has control over this – but it is still a feature that you should use carefully. You should not use the feature if you don’t reasonably expect the user to want it for your app – and you should avoid repeatedly prompting them once they’ve made their choice. The inclusion of a user-prompt puts the user firmly in control, which is an improvement over the older Win32 model.

Sample Code here.