Tag Archives: Startup

Ytica acquisition adds analytics to Twilio Flex cloud contact center

Twilio has acquired the startup Ytica and plans to embed its workforce optimization and analytics software into Twilio Flex, a cloud contact center platform set to launch later this year. Twilio will also sell Ytica’s products to competing contact center software vendors.

Twilio declined to disclose how much it paid for Ytica, but said the deal wouldn’t significantly affect its earnings in 2018. Twilio plans to open its 17th branch office in Prague, where Ytica is based.  

The acquisition comes as AI analytics has emerged as a differentiator in the expanding cloud contact center market and as Twilio — a leading provider of cloud-based communications tools for developers — prepares for the general release of its first prebuilt contact center platform, Twilio Flex.

Founded in 2017, Ytica sells a range of real-time analytics, reporting and performance management tools that contact center vendors can add to their platforms. In addition to Twilio, Ytica has partnerships with Talkdesk and Amazon Connect that are expected to continue.

Twilio is targeting Twilio Flex at large enterprises looking for the flexibility to customize their cloud contact centers. The platform launched in beta in March and is expected to be commercially released later this year.

The vendor’s communications platform as a service already supports hundreds of thousands of contact center agents globally. Twilio Flex places those same developer tools into the shell of a contact center dashboard preconfigured to support voice, text, video and social media channels.

The native integration of Ytica’s software should boost Twilio Flex’s appeal as businesses look for ways to save money and increase sales by automating the monitoring and management of contact center agents. 

Ytica’s portfolio includes speech analytics, call recording search, and real-time monitoring of calls and agent desktops. Businesses could use the technology to identify customer trends and to give feedback to agents.

Contact center vendors tout analytics in cloud

The marketing departments of leading contact center vendors have placed AI at the center of sales pitches this year, even though analysts say much of the technology is still in the early stages of usefulness.

This summer, Google unveiled an AI platform for building virtual agents and automating contact center analytics. Twilio was one of nearly a dozen vendors to partner with Google at launch, along with Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Appian and Upwire.

Within the past few months Avaya and Nice inContact have also updated their workforce optimization suites for contact centers with features including speech analytics and real-time trend reporting.

Enterprise technology buyers say analytics will be the most important technology for transforming customer experiences in the coming years, according to a recent survey of 700 IT and business leaders by Nemertes Research Group Inc., based in Mokena, Ill.

Qlik-Podium acquisition aims to boost BI data management

Qlik is buying startup Podium Data. The Qlik-Podium acquisition gives the self-service BI and data visualization software vendor new data management technology to boost its enterprise strategy and its ability to compete with archrival Tableau.

As part of the Qlik-Podium deal, Podium Data will move all 30 of its employees — including the co-founders and management team — from its Lowell, Mass., headquarters to Qlik’s regional office in Newton, Mass. Financial terms weren’t disclosed.

Podium will be a wholly owned subsidiary of Qlik and operate as a separate business unit, though with tighter connections to the Qlik platform to provide expanded BI data management capabilities, according to Drew Clarke, senior vice president of strategy management at Qlik.

Podium’s namesake technology, which automates data ingestion, validation, curation and preparation, will remain open and able to integrate with other vendors’ BI and analytics platforms, Podium CEO Paul Barth said.

Qlik on the rebound?

The Qlik-Podium purchase is part of Qlik’s effort to rebound from business problems that led to the company being bought and taken private by private equity firm Thoma Bravo in 2016. Multiple rounds of layoffs and a management change ensued, and Qlik lagged somewhat behind both Tableau and Microsoft Power BI in marketing and product development.

“It’s part of an acceleration of our vision,” Clarke said of the acquisition in a joint interview with Barth at the Newton office. “When we looked at what’s going on in big data in terms of volume of data and the management of that and making it accessible and analytics-ready, we felt that the Podium data solution was a great fit.”

While Clarke maintained that Qlik has been “enterprise-class” rather than department-oriented for some time, he also said Podium’s data management technology gives Qlik the ability to scale up and manage larger volumes of data.

Clarke said Podium’s technology is complementary to Qlik’s Associative Big Data Index, a system expected to be released later this year that will index the data in Hadoop clusters and other big data platforms for faster access by Qlik users. “Podium can be used to prepare data files, which supports the Associative Big Data Index creating its indexes and other files,” he said.

Photo of Qlik and Podium execs by Shaun Sutner
Drew Clarke, Qlik senior vice president of strategy management (left), and Podium Data CEO Paul Barth at Qlik’s office in Newton, Mass.

How the sale came about

Barth said that after emerging as a startup more than four years ago, Podium was mulling another round of investment in January. The company started talking to investors and “strategic technology companies” and connected with Qlik, he added.

Podium fills the data component in Qlik’s business strategy of providing data, a platform and analytics tools, Barth said, “and we’re going to work with them on the platform piece to deploy this both on premises and in the cloud.”

For now, Podium is keeping its name, “but more information will be coming” about that within the year, including the possibility of a new name, Clarke said.

With Podium, Qlik broadens scope

Tableau released a data preparation tool for use with its BI software in April. But buying Podium enables Qlik to establish a complete data management and analytics platform in conjunction with its Qlik Sense software and improves the company’s ability to compete with Tableau, said Donald Farmer, principal of analytics consulting firm TreeHive Strategy and a former Qlik executive.

“This is a good acquisition for Qlik,” Farmer said. “In terms of competition, this more complete platform enables them to position effectively in a broader space than, say, Tableau.”

Farmer said the Qlik-Podium acquisition also makes Qlik more closely resemble enterprise BI companies such as Tibco and Microsoft that offer end-to-end self-service data management, including software for acquiring, cleansing and curating data, plus analytics and collaboration tools.

“Together with announcements that Qlik made at their Qonnections conference in May about machine learning and big data analytics, this is part of a trend of Qlik expanding their scope,” Farmer said.

Qlik gets data lake capabilities

Podium often is associated with managing data lake environments that feed big data applications, although it says its platform can handle all types of enterprise data. The Podium architecture is built on top of Hadoop, which Barth said makes the technology less expensive for enterprises running tens of thousands of processing jobs a night.

David Menninger, an analyst at Ventana Research, said he was surprised at the Qlik-Podium acquisition announcement.

In part, that’s “because Qlik has not been particularly strong in the data lake market because of their in-memory architecture, but Podium is largely focused on data lakes or big data implementations,” Menninger said.

Nonetheless, Menninger said he sees some positive potential for the deal for Qlik and its users.

“As analytics vendors add more data preparation capabilities, Podium Data’s capabilities can significantly enhance the value of data processed using Qlik,” he said.

News writer Mark Labbe contributed to this story.

Startup Arrcus aims NOS at Cisco, Juniper in the data center

Startup Arrcus has launched a highly scalable network operating system, or NOS, designed to compete with Cisco and Juniper Networks for the entire leaf-spine network in the data center.

Arrcus, based in San Jose, Calif., introduced its debut product, ArcOS, this week. Additionally, the startup announced $15 million in Series A funding from venture capital firms Clear Ventures and General Catalyst.

ArcOS, which has Debian Open Network Linux at its core, enters a crowded market of companies offering stand-alone network operating systems for switching and routing, as well as systems of integrated software and hardware. The latter is a strength of traditional networking companies, such as Cisco and Juniper.

While Arrcus has some catching up to do against its rivals, no company has taken a dominating share of the NOS-only market, said Shamus McGillicuddy, an analyst at Enterprise Management Associates (EMA), based in Boulder, Colo. The majority of vendors count customers in the tens or hundreds at most.

Many companies testing pure open source operating systems today are likely candidates for commercial products. Vendors, however, must first prove the technology is reliable and fits the requirements of a corporate data center.

“Arrcus has a chance to capture a share of the data center operators that are still thinking about disaggregation,” McGillicuddy said. Disaggregation refers to the separation of the NOS from the underlying hardware.

ArcOS hardware support

Arrcus supports ArcOS on Broadcom chipsets and white box hardware from Celestica, Delta Electronics, Edgecore Networks and Quanta Computer. The approved chipsets are the StrataDNX Jericho+ and StrataXGS Trident 3, Trident 2 and Tomahawk. Architecturally, ArcOS can operate on other silicon and hardware, but the vendor does not support those configurations.

ArcOS is a mix of open source and proprietary software. The company, for example, uses its own versions of routing protocols, including Border Gateway Protocol, Intermediate System to Intermediate System, Multiprotocol Label Switching and Open Shortest Path First.

“Arrcus has built its own routing stack that is highly scalable, so it’s ideal for covering the entire leaf-and-spine network,” McGillicuddy said. “The routing scalability also gives Arrcus the ability to do some sophisticated traffic engineering.”

The more sophisticated uses for ArcOS include internet peering for internet service providers and content delivery network operators, according to Devesh Garg, CEO at Arrcus. “We feel ArcOS can be used anywhere.”

ArcOS analytics

Arrcus is also providing analytics for monitoring and optimizing the performance and security of switching and routing, McGillicuddy said. The company has based its analytics on the control plane and data plane telemetry streamed from the NOS.

Because a lot of other NOS vendors lack analytics, “Arrcus is emerging with a more complete solution from an operational standpoint,” McGillicuddy said. According to EMA research, many enterprises want embedded analytics in their network infrastructure.

Today, a hardware-agnostic NOS is mostly used by the largest of financial institutions, cloud service providers, and internet and telecommunication companies, analysts said. Tackling networking through disaggregated hardware and software typically requires a level of IT sophistication not found in mainstream enterprises.

Nevertheless, companies find the concept attractive because of the promise of cheaper hardware, faster innovation and less dependence on a single vendor. As a result, the use of a stand-alone NOS is gaining some traction.

Last year, for example, Gartner included NOS makers in its Magic Quadrant for Data Center Networking for the first time. Big Switch Networks and Cumulus Networks met the criteria for inclusion in the “visionaries” quadrant, along with VMware and Dell EMC.

Missions acquisition will simplify Slack integrations

Slack plans to use the technology gained from its acquisition of Missions, a division of the startup Robots & Pencils, to make it easier for non-developers to customize workflows and integrations within its team collaboration app.

A Slack user with no coding knowledge can use Missions to build widgets for getting more work done within the Slack interface. For example, a human resources department could use a Missions widget to track and approve interviews with job applicants.

The Missions tool could also power an employee help desk system within Slack, or be used to create an onboarding bot that keeps new hires abreast of the documents they need to sign and the orientations they must attend. 

“In the same way that code libraries make it easier to program, Slack is trying to make workflows easier for everyone in the enterprise,” said Wayne Kurtzman, an analyst at IDC. “Without training, users will be able to create their own automated workflows and integrate with other applications.”

Slack said it would take a few months to add Missions to its platform. It will support existing Missions customers for free during that time. In a note to its 200,000 active developers, Slack said the Missions purchase would benefit them too, by making it easier to connect their Slack integrations to other apps.

Slack integrations help startup retain market leadership

The acquisition is Slack’s latest attempt to expand beyond its traditional base of software engineers and small teams. More than 8 million people in 500,000 organizations now use the platform, which was launched in 2013, and 3 million of those users have paid accounts.

With more than 1,500 third-party apps available in its directory, Slack has more outside developers than competitors such as Microsoft Teams and Cisco Webex Teams. The vendor has sought to capitalize on that advantage by making Slack integrations more useful.

Earlier this year, Slack introduced a shortcut that lets users send information from Slack to business platforms like Zendesk and HubSpot. Slack could be used to create a Zendesk ticket asking the IT department for a new desktop monitor, for example.
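
As a rough illustration of how such a hand-off can work, the sketch below accepts a Slack slash-command payload and files a ticket through Zendesk’s REST API. The subdomain, tokens and route name are placeholders, and this is not Slack’s or Zendesk’s published integration code, just one way the pattern could be wired up.

```python
# Minimal sketch of a Slack-to-Zendesk hand-off, assuming a Flask app and
# a hypothetical Zendesk subdomain/API token supplied via env vars.
# Illustrative only -- not the actual Slack or Zendesk integration.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

ZENDESK_URL = f"https://{os.environ['ZENDESK_SUBDOMAIN']}.zendesk.com/api/v2/tickets.json"
ZENDESK_AUTH = (f"{os.environ['ZENDESK_EMAIL']}/token", os.environ["ZENDESK_API_TOKEN"])

@app.route("/slack/ticket", methods=["POST"])
def create_ticket():
    # Slack slash commands arrive as form-encoded fields, including the
    # free-text argument ("text") and the requesting user ("user_name").
    text = request.form.get("text", "").strip()
    user = request.form.get("user_name", "unknown")

    payload = {"ticket": {
        "subject": f"Slack request from {user}",
        "comment": {"body": text or "No description provided."},
    }}
    resp = requests.post(ZENDESK_URL, json=payload, auth=ZENDESK_AUTH, timeout=10)
    resp.raise_for_status()

    ticket_id = resp.json()["ticket"]["id"]
    # Respond in-channel so the requester sees the ticket number in Slack.
    return jsonify({"response_type": "in_channel",
                    "text": f"Created Zendesk ticket #{ticket_id}."})
```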

The automation of workflows, including through chatbots, is becoming increasingly important to enterprise technology buyers, according to Alan Lepofsky, an analyst at Constellation Research, based in Cupertino, Calif.

But it remains to be seen whether the average Slack user with no coding experience will take advantage of the Missions tool to build Slack integrations.

“I believe the hurdle in having regular knowledge workers create them is not skill, but rather even knowing that they can, or that they should,” Lepofsky said.

Qumulo storage parts the clouds with its K-Series active archive

Scale-out NAS startup Qumulo has added a dense cloud archiving appliance to help companies find hidden value in idle data.

Known as the K-Series, the product is an entry-level complement to Qumulo’s C-Series hybrid and all-flash NVMe P-Series primary NAS arrays. The K-144T active archive target embeds the Qumulo File Fabric (QF2) scalable file system on a standard 1U server.

Qumulo, based in Seattle, didn’t disclose the source of the K-Series’ underlying hardware, but it has an OEM deal with Hewlett Packard Enterprise to package the Qumulo Scalable File System on HPE Apollo servers. Qumulo storage customers need a separate software subscription to add the K-Series archive to an existing Qumulo primary storage configuration.

“It’s routine for our customers to be storing billions of files, either tiny files generated by machines or large files generated by video,” Qumulo chief marketing officer Peter Zaballos said. “We now have a complete product line, from archiving up to blazing high performance.”

Analytics and cloud bursting

Customers can build a physical K-Series cluster with a minimum of six nodes and scale by adding single nodes. That allows them to replicate data from the K-Series target to an identical Qumulo storage cluster in AWS for analytics or cloud bursting. A cluster can scale to 1,000 nodes.

“There’s no need to pull data back from the cloud. You can do rendering against a tier of storage in the cloud and avoid [the expense] of data migration,” Qumulo product manager Jason Sturgeon said.

Each Qumulo storage K-Series node scales to 144 TB of raw storage. Each node accepts a dozen 12 TB HDDs for storage, plus three SSDs to capture read metadata. QumuloDB analytics collects the metadata information as the data gets written. A nine-node configuration provides 1 PB of usable storage.
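
Taking those published figures at face value, a quick back-of-the-envelope check looks like the following; the raw-to-usable ratio is inferred from the stated numbers, not something Qumulo disclosed.

```python
# Back-of-the-envelope check of the stated K-144T figures; the ~77%
# raw-to-usable ratio below is inferred, not published.
drives_per_node = 12
drive_tb = 12
raw_per_node_tb = drives_per_node * drive_tb     # 144 TB raw per node
nodes = 9
raw_cluster_tb = nodes * raw_per_node_tb          # 1,296 TB raw for nine nodes
usable_pb = 1.0                                   # stated usable capacity
efficiency = usable_pb * 1000 / raw_cluster_tb    # roughly 0.77
print(raw_per_node_tb, raw_cluster_tb, round(efficiency, 2))
```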

Qumulo said it designed the K-Series arrays with an Intel Xeon D system-on-a-chip processor to reduce power consumption.

Exploding market for NFS, object archiving

Adding a nearline option to Qumulo storage addresses the rapid growth of unstructured data that requires file-based and object storage, said Scott Sinclair, a storage analyst at Enterprise Strategy Group.

“Qumulo is positioning the K-Series as a lower-cost, higher-density option for large-capacity environments,” Sinclair said. “There is a tremendous need for cheap and deep storage. Many cheap-and-deep workloads are using NFS protocols. This isn’t a file gateway that you retrofit on top of an object storage box. You can use normal file protocols.”

Those file protocols include NFS, SMB and REST-based APIs.

Sturgeon said the K-Series can handle reads at 6 Gbps and writes at 3 Gbps per 1 PB of usable capacity.

To eliminate tree walks, the QF2 file system updates the metadata of all files associated with a folder. Process checks occur every 15 seconds to provide visibility into the amount of data stored within the directory structure, allowing storage to be accessed and queried in near real time.
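
The general technique of keeping rolled-up totals at each directory, so a capacity query never has to walk the whole tree, can be sketched conceptually as follows; this is an illustration of the idea, not QF2’s actual data structures.

```python
# Conceptual sketch of directory-level aggregate metadata, using a simple
# in-memory tree. Not QF2's implementation; it only shows why rolled-up
# totals make "how much data is under this folder?" a direct lookup
# instead of a full tree walk.
class Dir:
    def __init__(self, parent=None):
        self.parent = parent
        self.total_bytes = 0          # aggregate of everything beneath

    def add_file(self, size):
        # On every write, push the delta up the ancestor chain, so each
        # directory always knows its subtree total.
        node = self
        while node is not None:
            node.total_bytes += size
            node = node.parent

root = Dir()
projects = Dir(parent=root)
projects.add_file(4_096)
projects.add_file(1_048_576)
# Query answered from the directory itself; no walk of its children.
print(root.total_bytes, projects.total_bytes)
```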

Qumulo has garnered more than $220 million in funding, including a $93 million Series D round earlier this month. Qumulo founders Peter Godman, Aaron Passey and Neal Fachan helped develop the Isilon OneFS clustered file system, leading the company to an IPO in 2006. EMC paid $2.25 billion to acquire the Isilon technology in 2010.

Godman is Qumulo CTO, and Fachan is chief scientist. Passey left in 2016 to take over as principal engineer at cloud hosting provider Dropbox.

Tableau acquisition of MIT AI startup aims at smarter BI software

Tableau Software has acquired AI startup Empirical Systems in a bid to give users of its self-service BI platform more insight into their data. The Tableau acquisition, announced today, adds an AI-driven engine that’s designed to automate the data modeling process without requiring the involvement of skilled statisticians.

Based in Cambridge, Mass., Empirical Systems started as a spinoff from the MIT Probabilistic Computing Project. The startup claims its analytics engine and data platform is able to automatically model data for analysis and then provide interactive and predictive insights into that data.

The technology is still in beta, and Francois Ajenstat, Tableau’s chief product officer, wouldn’t say how many customers are using it as part of the beta program. But he said the current use cases are broad and include companies in retail, manufacturing, healthcare and financial services. That wide applicability is part of the reason why the Tableau acquisition happened, he noted.

Catch-up effort with advanced technology

In some ways, however, the Tableau acquisition is a “catch-up play” on providing automated insight-generation capabilities, said Jen Underwood, founder of Impact Analytix LLC, a product research and consulting firm in Tampa. Some other BI and analytics vendors “already have some of this,” Underwood said, citing Datorama and Tibco as examples.

Empirical’s automated modeling and statistical analysis tools could put Tableau ahead of its rivals, she said, but it’s too soon to tell without having more details on the integration plans. Nonetheless, she said she thinks the technology will be a useful addition for Tableau users.

“People will like it,” she said. “It will make advanced analytics easier for the masses.”

Tableau already has been investing in AI and machine learning technologies internally. In April, the company released its Tableau Prep data preparation software, with embedded fuzzy clustering algorithms that employ AI to help users group data sets together. Before that, Tableau last year released a recommendation engine that shows users recommended data sources for analytics applications. The feature is similar to how Netflix suggests movies and TV shows based on what a user has previously watched, Ajenstat explained.

Integration plans still unclear

Ajenstat wouldn’t comment on when the Tableau acquisition will result in Empirical’s software becoming available in Tableau’s platform, or whether customers will have to pay extra for the technology.

Empirical CEO Richard Tibbetts on the company’s automated data modeling technology.

“Whether it’s an add-on or how it’s integrated, it’s too soon to talk about that,” he said.

However, he added that the Empirical engine will likely be “a foundational element” in Tableau, at least partially running behind the scenes, with a goal that “a lot of different things in Tableau will get smarter.”

Unlike some predictive algorithms that require large stores of data to function properly, Empirical’s software works with “data of all sizes, both large and small,” Ajenstat said. When integration does eventually begin to happen, Ajenstat said Tableau hopes to be able to better help users identify trends and outliers in data sets and point them toward factors they could drill into more quickly.

Augmented analytics trending

Tableau’s move around augmented analytics is in line with what Gartner pointed to as a key emerging technology in its 2018 Magic Quadrant report on BI and analytics platforms.

Various vendors are embedding machine learning tools into their software to aid with data preparation and modeling and with insight generation, according to Gartner. The consulting and market research firm said the augmented approach “has the potential to help users find the most important insights more quickly, particularly as data complexity grows.”

Such capabilities have yet to become mainstream product requirements for BI software buyers, Gartner said in the February 2018 report. But they are “a proof point for customers that vendors are innovating at a rapid pace,” it added.

The eight-person team from Empirical Systems will continue to work on the software after the Tableau acquisition. Tableau, which didn’t disclose the purchase price, also plans to create a research and development center in Cambridge.

Senior executive editor Craig Stedman contributed to this story.

Apstra bolsters IBN with customizable analytics

Startup Apstra has added customizable analytics to its intent-based networking software, giving it the ability to spot potential problems and report them to network managers.

Apstra this week introduced intent-based analytics as part of an upgrade to the company’s Apstra Operating System (AOS). The latest version, AOS 2.1, also includes other enhancements, such as support for additional network hardware and the ability to use a workload’s MAC or IP address to find it in an IP fabric.

In general, AOS is a network operating system designed to let managers automatically configure and troubleshoot switches. Apstra focuses on hardware transporting Layer 2 and Layer 3 traffic between devices from multiple vendors, including Arista Networks, Cisco, Dell and Juniper Networks. Apstra also supports white-box hardware running the Cumulus Networks OS.

AOS, which can run on a virtualized x86 server, communicates with the hardware through installed drivers or the hardware’s REST API. Data on the state of each device is continuously fed to the AOS data store. Alerts are sent to network operators when the state data conflicts with how a device is configured to operate.
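
In outline, that reconciliation loop resembles the sketch below: an intended configuration is compared against streamed device state, and any mismatch becomes an alert. The field names are invented for illustration and are not Apstra’s data model.

```python
# Illustrative intent-versus-state check with made-up field names; it
# mirrors the idea of alerting when telemetry disagrees with configured
# intent, not Apstra's actual AOS internals.
intent = {
    "leaf1:eth1": {"admin": "up", "speed_gbps": 100, "peer": "spine1:eth1"},
    "leaf1:eth2": {"admin": "up", "speed_gbps": 100, "peer": "spine2:eth1"},
}

def check(telemetry):
    alerts = []
    for port, expected in intent.items():
        observed = telemetry.get(port, {})
        for key, want in expected.items():
            got = observed.get(key)
            if got != want:
                alerts.append(f"{port}: {key} expected {want!r}, observed {got!r}")
    return alerts

# Example telemetry snapshot streamed from the devices.
print(check({
    "leaf1:eth1": {"admin": "up", "speed_gbps": 100, "peer": "spine1:eth1"},
    "leaf1:eth2": {"admin": "down", "speed_gbps": 100, "peer": "spine2:eth1"},
}))
```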

AOS 2.1 takes the software’s capabilities up a notch through tools that operators can use to choose specific data they want the Apstra analytics engine to process.

“This is a logical progression for Apstra with AOS,” said Brad Casemore, an analyst at IDC. “Pervasive, real-time analytics should be an integral element of any intent-based networking system.”

Using Apstra analytics

The first step is for operators to define the type of data AOS will collect. For example, managers could ask for the CPU utilization on all spine switches. Also, they could request queries of all the counters for server-facing interfaces and of the routing tables for links connecting leaf and spine switches.

Mansour Karam, CEO, Apstra

“If you were to add a new link, add a new server, or add a new spine, the data would be included automatically and dynamically,” Apstra CEO Mansour Karam said.

Once the data is defined, operators can choose the conditions under which the software will examine the information. Apstra provides preset scenarios or operators can create their own. “You can build this [data] pipeline in the way that you want, and then put in rules [to extract intelligence],” Karam said.

Useful information that operators can extract from the system includes the following (see the sketch after this list):

  • traffic imbalances on connections between leaf and spine switches;
  • links reaching traffic capacity;
  • the distribution of north-south and east-west traffic; and
  • the available bandwidth between servers or switches.
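
To give a rough feel for the define-the-data, attach-a-rule flow described above, here is a hypothetical probe-and-rule pipeline; the names, metrics and thresholds are invented and do not reflect Apstra’s actual API.

```python
# Hypothetical probe/rule pipeline in the spirit described above: first
# name the data to collect, then attach a condition that turns the stream
# into an alert. Not Apstra's API; purely illustrative.
probes = {
    "spine_cpu": {"scope": "role=spine", "metric": "cpu_utilization_pct"},
    "fabric_links": {"scope": "link=leaf-spine", "metric": "tx_utilization_pct"},
}

rules = [
    {"probe": "spine_cpu", "condition": lambda v: v > 80,
     "message": "spine CPU above 80%"},
    {"probe": "fabric_links", "condition": lambda v: v > 90,
     "message": "leaf-spine link near capacity"},
]

def evaluate(samples):
    # samples: {probe_name: {element: latest_value}}
    for rule in rules:
        for element, value in samples.get(rule["probe"], {}).items():
            if rule["condition"](value):
                print(f"ALERT {element}: {rule['message']} ({value})")

evaluate({"spine_cpu": {"spine1": 91, "spine2": 42},
          "fabric_links": {"leaf1-spine1": 95}})
```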

Enterprises moving slowly with IBN deployments

Other vendors, such as Cisco, Forward Networks and Veriflow, are building out intent-based networking (IBN) systems to drive more extensive automation. Analytics plays a significant role in making automation possible.

“Nearly every enterprise that adopts advanced network analytics solutions is using it to enable network automation,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo. “You can’t really have extensive network automation without analytics. Otherwise, you have no way to verify that what you are automating conforms with your intent.”

Today, most IT staffs use command-line interfaces (CLIs) to manually program switches and scores of other devices that comprise a network’s infrastructure. IBN abstracts configuration requirements from the CLI and lets operators use declarative statements within a graphical user interface to tell the network what they want. The system then makes the necessary changes.

The use of IBN is just beginning in the enterprise. Gartner predicts the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

Atomist extends CI/CD to automate the entire DevOps toolchain

Startup Atomist hopes to revolutionize development automation throughout the application lifecycle, before traditional application release automation vendors catch on.

Development automation has been the elusive goal of a generation of tools, particularly DevOps tools, that promise continuous integration and continuous delivery. The latest is Atomist and its development automation platform, which aims to automate as many of the mundane tasks as possible in the DevOps toolchain.

Atomist ingests information about an organization’s software projects and processes to build a comprehensive understanding of those projects. Then it creates automations for the environment, which use programming tools such as parser generators and microgrammars to parse and contextualize code.

The system also correlates event streams pulled from various stages of development and represents them as code in a graph database known as the Cortex. Atomist’s founders believe the CI pipeline model falls short, so the platform takes an event-based approach that models everything in an organization’s software delivery process as a stream of events. That event-driven model also lets development teams compose delivery flows based on events.
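
The event-stream idea, in which each delivery event triggers the next step and flows can be composed from handlers, could be sketched roughly as follows; the event names and handlers are invented for illustration and are not Atomist’s SDK.

```python
# Toy event-driven delivery flow in the spirit of the description above.
# Event names and handlers are invented; this is not Atomist's SDK.
from collections import defaultdict

handlers = defaultdict(list)

def on(event_type):
    def register(fn):
        handlers[event_type].append(fn)
        return fn
    return register

def emit(event_type, **payload):
    for fn in handlers[event_type]:
        fn(payload)

@on("push")
def start_build(evt):
    print(f"building {evt['repo']}@{evt['sha'][:7]}")
    emit("build_succeeded", repo=evt["repo"], sha=evt["sha"])

@on("build_succeeded")
def notify_channel(evt):
    # In an Atomist-style flow, a step like this would post to the
    # project's chat channel rather than print.
    print(f"#{evt['repo']}-channel: build of {evt['sha'][:7]} passed, deploying")

emit("push", repo="web-app", sha="9f2c1e7d0a")
```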

In addition, Atomist automatically creates Git repositories, configures systems for issue tracking and continuous integration, and creates chat channels that consolidate project notifications and deliver information to the right people.

“Atomist is an interesting and logical progression of DevOps toolchains, in that it can traverse events across a wide variety of platforms but present them in a fashion such that developers don’t need to context switch,” said Stephen O’Grady, principal analyst at RedMonk in Portland, Maine. “Given how many moving parts are involved in DevOps toolchains, the integrations are welcome.”

Mik Kersten, a leading DevOps expert and CEO at Tasktop Technologies, has tried Atomist firsthand and calls it a fundamentally new approach to managing delivery. As delivery pipelines become increasingly complex, the sources of waste move well beyond the code and into the tools spread across the pipeline, Kersten noted.

The rise of microservices, with tens or hundreds of services in a single environment, introduces trouble spots as developers collaborate on, deploy and monitor all of those services, said Rod Johnson, Atomist CEO and creator of the Spring Framework.

This is particularly important for security, where keeping services consistent is paramount. In last year’s Equifax breach, hackers gained access through an unpatched version of Apache Struts — but with Atomist, an organization can identify and upgrade old software automatically across potentially hundreds of repositories, Johnson said.
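
A drastically simplified version of that kind of sweep, scanning many repositories for an outdated dependency and flagging the ones that need an upgrade, might look like the following; the repository contents and the “safe” version number are hypothetical, and a real tool would parse build files and open upgrade pull requests.

```python
# Simplified sweep for an outdated dependency across many repos.
# Repo contents and the FIXED version are hypothetical placeholders for
# this example; a real tool would parse pom.xml/build files and open
# upgrade pull requests automatically.
FIXED = (2, 5, 13)   # hypothetical minimum safe version for this sketch

def parse(version):
    return tuple(int(x) for x in version.split("."))

repos = {
    "billing-service": {"org.apache.struts:struts2-core": "2.3.5"},
    "web-portal":      {"org.apache.struts:struts2-core": "2.5.16"},
    "batch-jobs":      {},   # no Struts dependency at all
}

for repo, deps in repos.items():
    version = deps.get("org.apache.struts:struts2-core")
    if version and parse(version) < FIXED:
        print(f"{repo}: struts2-core {version} is outdated, upgrade needed")
```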

Atomist represents a new class of DevOps product that goes beyond CI, which is “necessary, but not sufficient,” Johnson said.

Rod Johnson, CEO, Atomist

Tasktop’s Kersten agreed that the approach to developer-centric automation “goes way beyond what we got with CI.” Atomist created a Slack bot that incorporates its automation facilities, driven by a development automation engine reminiscent of model-driven development or aspect-oriented programming, but one that provides generative facilities not only for code but across project resources and other tools, Kersten said. A notification system informs users what the automations are doing.

Most importantly, Atomist is fully extensible, and its entire internal data model can be exposed in GraphQL.

Tasktop has already explored ways to connect Atomist to Tasktop’s Integration Hub and the 58 Agile and DevOps tools it currently supports, Kersten said.

Automation built into development

As DevOps becomes more widely adopted, integrating automation into the entire DevOps toolchain is critical to help streamline the development process so programmers can develop faster, said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.

“The market to integrate automation and development will grow, as both the companies that use DevOps and the number of applications they develop increase,” he said. Atomist’s integration in the code creation and deployment process, through release and update management processes, “enables automation not just in the development process but also in day two and beyond application management,” he said.

Atomist joins other approaches such as GitOps and Bitbucket Pipelines that target the developer who chooses the tools used across the complete lifecycle, said Robert Stroud, an analyst at Forrester Research in Cambridge, Mass.

“Selection of tooling such as Atomist will drive developer productivity, allowing them to focus on code, not pipeline development — this is good for DevOps adoption and acceleration,” he said. “The challenge for these tools is that although new code fits well, deployment solutions are selected within enterprises by Ops teams, and also need to support on-premises deployment environments.”

For that reason, look for traditional application release automation vendors, such as IBM, XebiaLabs and CA Technologies, to deliver features similar to Atomist’s capabilities in 2018, Stroud said.

Startup Liqid looks to make a splash in composable storage

Hardware startup Liqid is set to unveil a PCIe-based fabric switch that fluidly configures bare-metal servers from pools of physical compute, flash storage, graphical processing units and network devices.

Backed by $19.5 million in venture funding, the Lafayette, Colo., vendor said Liqid Composable Infrastructure (Liqid CI) is scheduled for general availability by March. Liqid CI integrates the Liqid Grid PCIe 3.0 switch and Liqid Command Center orchestration software on standard servers.

Liqid — pronounced “liquid” — has partnerships with flash memory maker Kingston Technology and Phison Electronics Corp., a Taiwanese maker of NAND flash memory controllers. Both vendors are seed investors.

Liqid CI is designed to scale the provisioning of disaggregated computing devices on bare-metal using Peripheral Component Interconnect Express (PCIe).

The Liqid Grid fabric deploys compute, GPUs, network cards and Kingston SSDs on a shared PCIe bus. The programmable storage architecture allows data centers to dynamically compose a computer system on the fly from disaggregated devices. Liqid Command Center configures the individual components on demand as an application needs them.

The idea is to allow an application to consume only the resources it needs. Once the tasks are completed, the device is released back to the global resource pool for other jobs.
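
The compose-then-release lifecycle can be pictured with a toy model like the one below; the resource pool, class and method names are invented and are not Liqid Command Center’s interface.

```python
# Toy model of composing a bare-metal node from disaggregated pools and
# releasing the devices afterward. Pool contents and method names are
# invented; this is not Liqid Command Center's API.
pool = {"cpu": 8, "gpu": 16, "nvme_ssd": 24, "nic": 12}

class ComposedServer:
    def __init__(self, request):
        for device, count in request.items():
            if pool[device] < count:
                raise RuntimeError(f"not enough {device} in the pool")
        for device, count in request.items():
            pool[device] -= count          # claim devices over the fabric
        self.request = request

    def release(self):
        for device, count in self.request.items():
            pool[device] += count          # return devices for other jobs
        self.request = {}

# Compose a GPU-heavy node for one job, then give the parts back.
node = ComposedServer({"cpu": 2, "gpu": 4, "nvme_ssd": 2, "nic": 1})
print("in use:", node.request, "remaining:", pool)
node.release()
print("after release:", pool)
```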

“If you need more storage, you don’t send somebody with a cart to plug in more storage. You reprogram the fabric to suck in more storage from the interconnected pools,” said Sumit Puri, a Liqid co-founder and vice president of marketing.

Liqid and Orange Silicon Valley — the global telecom provider’s U.S. arm — last November displayed a prototype device that can provide on-demand GPU performance for high-performance computing.

Camberley Bates, a managing director at Boulder, Colo., company Evaluator Group, said Liqid CI provides the ability to flexibly add or subtract computing devices to boost performance or control costs.

“You’re using straightforward x86 CPUs and SSDs. Pull all the pieces together and off you go,” Bates said.

Hewlett Packard Enterprise is considered the leader in the emerging composable infrastructure market with its Ethernet-based Synergy virtualization hardware platform. Composable infrastructure signals where converged and hyper-converged markets are headed, Bates said. 

“There is too much hardening and not enough flexibility in a converged environment,” Bates said. “There are only a few vendors doing composable systems now, but over the long term, we think it has legs.”

The Liqid Grid PCIe 3.0 managed switch scales to connect thousands of devices. Physical hardware interconnections can be copper or photonics over MiniHD SAS cabling, with 24 ports and up to 96 PCIe lanes. Each port is rated for full duplex bandwidth of 8 gigabits per second.

Puri said Liqid is seeking OEM partners to design Liqid CI rack-scale systems with qualified servers. The earliest to sign on is Inspur, which markets a Liqid CI-based platform to offer graphical processing units (GPUs) as a service to data centers running large AI application farms.

Customers also can purchase a developer’s kit directly from Liqid that comes as a 6U rack with two nodes, four 6.4 TB SSDs, two network interface cards and two GPU cards for about $30,000.

StorOne attacks bottlenecks with new TRU storage software

Startup StorOne this week officially launched its TRU multiprotocol software, which its founder claims will improve the efficiency of storage systems.

The Israel-based newcomer spent six years developing Total Resource Utilization (TRU) software with the goal of eliminating bottlenecks caused by software that cannot keep up with faster storage media and network connectivity.

StorOne developers collapsed the storage stack into a single layer that is designed to support block (Fibre Channel and iSCSI), file (NFS, SMB and CIFS) and object (Amazon Simple Storage Service) protocols on the same drives. The company claims to support enterprise storage features such as unlimited snapshots per volume, with no adverse impact to performance.

TRU software is designed to run on commodity hardware and support hard disk drives; faster solid-state drives (SSDs); and higher performance, latency-lowering NVMe-based PCI Express SSDs on the same server. The software installs either in a virtual machine or on a physical server.

StorOne CEO and founder Gal Naor said the TRU software-defined storage fits use cases ranging from high-performance databases to low-performance workloads, such as backup and data archiving.

‘Dramatically less resources’

“We need dramatically less resources to achieve better results. Results are the key here,” said Naor, whose experience in storage efficiency goes back to his founding of real-time compression specialist Storwize, which IBM acquired in 2010.

StorOne CTO Raz Gordon said storage software has failed to keep up with the speed of today’s drives and storage networks.

“We understood that the software is the real bottleneck today of storage systems. It’s not the drives. It’s not the connectivity,” said Gordon, who was the leading force behind the Galileo networking technology that Marvell bought in 2001.

StorOne’s leaders have offered few details so far about the product’s architecture and enterprise capabilities beyond unlimited storage snapshots.

Marc Staimer, senior analyst at Dragon Slayer Consulting, said StorOne’s competition would include any software-defined storage products that support block and file protocols, hyper-converged systems, and traditional unified storage systems.

“It’s a crowded field, but they’re the only ones attacking the efficiency issue today,” Staimer said.

“Because of TRU’s storage efficiency, it gets more performance out of fewer resources. Less hardware equals lower costs for the storage system, supporting infrastructure, personnel, management, power and cooling, etc.,” Staimer added. “With unlimited budget, I can get unlimited performance. But nobody has unlimited budgets today.”

StorOne user interface
TRU user interface shows updated performance metrics for IOPS, latency, I/O size and throughput.

Collapsed storage stack

The StorOne executives said they rebuilt the storage software with new algorithms to address bottlenecks. They claim StorOne’s collapsed storage stack enables the fully rated IOPS and throughput of the latest high-performance SSDs at wire speed.

“The bottom line is the efficiency of the system that results in great savings to our customers,” Gordon said. “You end up with much less hardware and much greater performance.”

StorOne claimed a single TRU virtual appliance with four SSDs could deliver the performance of a midrange storage system, and an appliance with four NVMe-based PCIe SSDs could achieve the performance and low latency of a high-end storage system. The StorOne system can scale up to 18 GBps of throughput and 4 million IOPS with servers equipped with NVMe-based SSDs, according to Naor. He said the maximum capacity for the TRU system is 15 PB, but he provided no details on the server or drive hardware.

“It’s the same software that can be high-performance and high-capacity,” Naor said. “You can install it as an all-flash array. You can install it as a hybrid. And you’re getting unlimited snapshots.”

Naor said customers could choose the level of disk redundancy to protect data on a volume basis. Users can mix and match different types of drives, and there are no RAID restrictions, he said.

StorOne pricing

Pricing for the StorOne TRU software is based on physical storage consumption through a subscription license. A performance-focused installation of 150 TB would cost 1 cent per gigabyte, whereas a capacity-oriented deployment of 1 PB would be $0.0006 per gigabyte, according to the company. StorOne said pricing could drop to $0.002 per gigabyte with multi-petabyte installations. The TRU software license includes support for all storage protocols and features.

StorOne has an Early Adopters Program in which it supplies free on-site hardware of up to 1 PB.

StorOne is based in Tel Aviv and also has offices in Dallas, New York and Singapore. Investors include Seagate and venture capital firms Giza and Vaizra. StorOne’s board of directors includes current Microsoft chairman and former Symantec and Virtual Instruments CEO John Thompson, as well as Ed Zander, former Motorola CEO and Sun Microsystems president.