Tag Archives: Startup

Amazon buys NVMe startup E8 Storage to boost public cloud

Another NVMe flash startup has been acquired — this time by a public cloud storage giant.

Amazon confirmed it will acquire E8 Storage and deploy its rack-scale flash storage in the Amazon Web Services (AWS) public cloud.

Amazon said the transaction covers “some assets,” including the hiring of the E8 Storage team. E8 Storage CEO Zivan Ori reportedly will join Amazon in an unspecified executive capacity.

Israeli news outlet Globes first reported the story, citing unnamed sources who estimated Amazon will pay between $50 million and $60 million to acquire E8 Storage. A separate report by Reuters said the purchase price is much less, citing another source with knowledge of the deal. Amazon did not publicly disclose the acquisition price.

Amazon’s move comes two weeks after its public cloud rival Google bought file storage software startup Elastifile and nearly one month after holding company StorCentric acquired NVMe array hopeful Vexata.

The Amazon-E8 Storage marriage signals growing interest in NVMe flash. There is widespread industry belief that the NVMe protocol will eventually replace traditional SCSI-based storage. SCSI traffic makes several hops as it traverses the network. By contrast, NVMe allows applications to talk directly to storage across multilane PCIe devices.

For Amazon, the deal highlights the competition it faces from enterprises seeking an AWS-like alternative that costs less than AWS and is managed on premises. It will be worth watching to see if Amazon integrates E8 Storage gear with AWS Nitro compute instances, which use NVMe as the underlying media with Elastic Block Store.

By acquiring E8 Storage, Amazon gains a storage operating system optimized for NVMe flash, said Eric Burgener, a research vice president of storage at analyst firm IDC.

“E8 has an NVMe-over-TCP implementation integrated in its software. It’s not that Amazon couldn’t have built that, but E8 already built it and it works. TCP is clearly the future of NVMe-over-fabrics-attached storage. That’s where the volume is going to be,” Burgener said.
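As a concrete illustration of where NVMe over TCP is headed, a host running the standard Linux nvme-cli can discover and attach a remote NVMe/TCP subsystem over ordinary Ethernet. The address, port and NQN below are placeholders for illustration, not details of E8's product:

```shell
# Discover NVMe/TCP subsystems advertised by a remote target
# (192.0.2.10 and the NQN are illustrative values).
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Attach one subsystem; its namespaces then appear as local
# block devices such as /dev/nvme1n1.
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
     -n nqn.2016-01.com.example:subsystem1

# Confirm the remote namespace is now visible locally.
nvme list
```

Because the transport is plain TCP, no special host adapters or lossless fabric configuration are required, which is the volume argument Burgener makes above.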

Ori and Alex Friedman founded E8 Storage in 2014. Both previously had worked in management positions at IBM Storage. Friedman was E8’s vice president of R&D. E8 Storage emerged from stealth in 2016, with a dense block-based array that combines 24 NVMe SSDs in a 2U standard form factor.

The E8 Storage software targets analytics and similarly data-intensive workloads that require extreme performance and ultralow latency. E8 received more than $18 million in total funding, including a $12 million Series B round in 2016.

In addition to E8 arrays, customers have also been able to buy E8 Storage software on reference architecture with servers by Dell, Hewlett Packard Enterprise and Lenovo. The vendor this year added parallel file storage to target high-performance computing.

E8 Storage was an early entrant in end-to-end NVMe flash. The E8 architecture is based on industry-standard TCP/IP networking. Other NVMe startups include Apeiron Data, Excelero and Pavilion Data Systems.

Burgener said he wouldn’t be surprised to see more consolidation in NVMe storage. After ceding ground early, Burgener said legacy storage vendors have aggressively pushed into NVMe.

“Most of the majors have gotten their marketing acts together around selling NVMe for mixed workload consolidation, but they also want to go after the same kind of dedicated workloads” first targeted by NVMe startups, Burgener said.


Google’s Elastifile buy shows need for cloud file storage

Google’s acquisition of startup Elastifile underscored the increasing importance of enterprise-class file storage in the public cloud.

Major cloud providers have long offered block storage for applications that customers run on their compute services and focused on scale-out object storage for the massively growing volumes of colder unstructured data. Now they’re also shoring up file storage as enterprises look to shift more workloads to the cloud.

Google disclosed its intention to purchase Elastifile for an undisclosed sum after collaborating with the startup on a fully managed file storage service that launched early in 2019 on its cloud platform. At the time, Elastifile’s CEO, Erwan Menard, positioned the service as a complement to the Google Cloud Filestore, saying his company’s technology would provide higher performance, greater scale-out capacity and more enterprise-grade features than the Google option.

Integration plans

In a blog post on the acquisition, Google Cloud CEO Thomas Kurian said the teams would join together to integrate the Elastifile technology with Google Cloud Filestore. Kurian wrote that Elastifile’s pioneering software-defined approach would address the challenges of file storage for enterprise-grade applications running at scale in the cloud.

“Google now has the opportunity to create hybrid cloud file services to connect the growing unstructured data at the edge or core data centers to the public cloud for processing,” said Julia Palmer, a vice president at Gartner. She said Google could have needed considerably more time to develop and perfect a scale-out file system if not for the Elastifile acquisition.

Building an enterprise-level, high-performance NFS file system from scratch is “insanely difficult,” said Scott Sinclair, a senior analyst at Enterprise Strategy Group. He said Google had several months to “put Elastifile through its paces,” see that the technology looked good, and opt to buy rather than build the sort of file system that is “essential for the modern application environments that Google wants to sell into.”

Target workloads

Kurian cited examples of companies running SAP and developers building stateful container-based applications that require natively compatible file storage. He noted customers such as Appsbroker, eSilicon and Forbes that use the Elastifile Cloud File Service on Google Cloud Platform (GCP). In the case of eSilicon, the company bursts semiconductor design workflows to Google Cloud when it needs extra compute and storage capacity during peak times, Elastifile has said.

“The combination of Elastifile and Google Cloud will support bringing traditional workloads into GCP faster and simplify the management and scaling of data and compute intensive workloads,” Kurian wrote. “Furthermore, we believe this combination will empower businesses to build industry-specific, high performance applications that need petabyte-scale file storage more quickly and easily.”

Elastifile’s Israel-based engineering team spent four years developing the distributed Elastifile Cloud File System (ECFS). They designed ECFS for hybrid and public cloud use and banked on high-speed flash hardware to prevent metadata server bottlenecks and facilitate consistent performance.

Elastifile emerged from stealth in April 2017, claiming 25 customers, including 16 service providers. Target use cases it cited for ECFS included high-performance NAS, workload consolidation in virtualized environments, big data analytics, relational and NoSQL databases, high-performance computing, and the lift and shift of data and applications to the cloud. Elastifile raised $74 million over four funding rounds, including strategic investments from Dell Technologies, Cisco and Western Digital.

One open question is the degree to which Google will support Elastifile’s existing customers, especially those with hybrid cloud deployments that did not run on GCP. Both Google and Elastifile declined to respond.

Cloud NAS competition

The competitive landscape for the Elastifile Cloud File Service on GCP has included Amazon’s Elastic File System (EFS), Dell EMC’s Isilon on GCP, Microsoft’s Azure NetApp Files, and NetApp on GCP.

“Cloud NAS and cloud file systems are the last mile for cloud storage. Everybody does block. Everybody does object. NAS and file services were kind of an afterthought,” said Henry Baltazar, research director of storage at 451 Research.

But Baltazar said as more companies are thinking about moving their NFS-based legacy applications to the cloud, they don’t want to go through the pain and the cost of rewriting them for object storage or building a virtual file service. He sees Google’s acquisition of Elastifile as “a good sign for customers that more of these services will be available” for cloud NAS.

“Google doesn’t really make infrastructure acquisitions, so it says something that Google would make a deal like this,” Baltazar said. “It just shows that there’s a need.”


Ytica acquisition adds analytics to Twilio Flex cloud contact center

Twilio has acquired the startup Ytica and plans to embed its workforce optimization and analytics software into Twilio Flex, a cloud contact center platform set to launch later this year. Twilio will also sell Ytica’s products to competing contact center software vendors.

Twilio declined to disclose how much it paid for Ytica, but said the deal wouldn’t significantly affect its earnings in 2018. Twilio plans to open its 17th branch office in Prague, where Ytica is based.  

The acquisition comes as AI analytics has emerged as a differentiator in the expanding cloud contact center market and as Twilio — a leading provider of cloud-based communications tools for developers — prepares for the general release of its first prebuilt contact center platform, Twilio Flex.

Founded in 2017, Ytica sells a range of real-time analytics, reporting and performance management tools that contact center vendors can add to their platforms. In addition to Twilio, Ytica has partnerships with Talkdesk and Amazon Connect that are expected to continue.

Twilio is targeting Twilio Flex at large enterprises looking for the flexibility to customize their cloud contact centers. The platform launched in beta in March and is expected to be commercially released later this year.

The vendor’s communications platform as a service already supports hundreds of thousands of contact center agents globally. Twilio Flex places those same developer tools into the shell of a contact center dashboard preconfigured to support voice, text, video and social media channels.

The native integration of Ytica’s software should boost Twilio Flex’s appeal as businesses look for ways to save money and increase sales by automating the monitoring and management of contact center agents. 

Ytica’s portfolio includes speech analytics, call recording search, and real-time monitoring of calls and agent desktops. Businesses could use the technology to identify customer trends and to give feedback to agents.

Contact center vendors tout analytics in cloud

The marketing departments of leading contact center vendors have placed AI at the center of sales pitches this year, even though analysts say much of the technology is still in the early stages of usefulness.

This summer, Google unveiled an AI platform for building virtual agents and automating contact center analytics. Twilio was one of nearly a dozen vendors to partner with Google at launch, along with Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Appian and Upwire.

Within the past few months Avaya and Nice inContact have also updated their workforce optimization suites for contact centers with features including speech analytics and real-time trend reporting.

Enterprise technology buyers say analytics will be the most important technology for transforming customer experiences in the coming years, according to a recent survey of 700 IT and business leaders by Nemertes Research Group Inc., based in Mokena, Ill.

Qlik-Podium acquisition aims to boost BI data management

Qlik is buying startup Podium Data. The Qlik-Podium acquisition gives the self-service BI and data visualization software vendor new data management technology to boost its enterprise strategy and its ability to compete with archrival Tableau.

As part of the Qlik-Podium deal, Podium Data will move all 30 of its employees — including the co-founders and management team — from its Lowell, Mass., headquarters to Qlik’s regional office in Newton, Mass. Financial terms weren’t disclosed.

Podium will be a wholly owned subsidiary of Qlik and operate as a separate business unit, though with tighter connections to the Qlik platform to provide expanded BI data management capabilities, according to Drew Clarke, senior vice president of strategy management at Qlik.

Podium’s namesake technology, which automates data ingestion, validation, curation and preparation, will remain open and able to integrate with other vendors’ BI and analytics platforms, Podium CEO Paul Barth said.

Qlik on the rebound?

The Qlik-Podium purchase is part of Qlik’s effort to rebound from business problems that led to it being bought and taken private by private equity firm Thoma Bravo in 2016. Multiple rounds of layoffs and a management change ensued, and Qlik lagged somewhat behind both Tableau and Microsoft Power BI in marketing and product development.

“It’s part of an acceleration of our vision,” Clarke said of the acquisition in a joint interview with Barth at the Newton office. “When we looked at what’s going on in big data in terms of volume of data and the management of that and making it accessible and analytics-ready, we felt that the Podium data solution was a great fit.”

While Clarke maintained that Qlik has been “enterprise-class” rather than department-oriented for some time, he also said Podium’s data management technology gives Qlik the ability to scale up and manage larger volumes of data.

Clarke said Podium’s technology is complementary to Qlik’s Associative Big Data Index, a system expected to be released later this year that will index the data in Hadoop clusters and other big data platforms for faster access by Qlik users. “Podium can be used to prepare data files, which supports the Associative Big Data Index creating its indexes and other files,” he said.

Photo of Qlik and Podium execs by Shaun Sutner
Drew Clarke, Qlik senior vice president of strategy management (left), and Podium Data CEO Paul Barth at Qlik’s office in Newton, Mass.

How the sale came about

Barth said that after emerging as a startup more than four years ago, Podium was mulling another round of investment in January. The company started talking to investors and “strategic technology companies” and connected with Qlik, he added.

Podium fits into Qlik’s business strategy to provide data, a platform and analytics tools in the role of the data component, Barth said, “and we’re going to work with them on the platform piece to deploy this both on premises and in the cloud.”

For now, Podium is keeping its name, “but more information will be coming” about that within the year, including the possibility of a new name, Clarke said.

With Podium, Qlik broadens scope

Tableau released a data preparation tool for use with its BI software in April. But buying Podium enables Qlik to establish a complete data management and analytics platform in conjunction with its Qlik Sense software and improves the company’s ability to compete with Tableau, said Donald Farmer, principal of analytics consulting firm TreeHive Strategy and a former Qlik executive.

This is part of a trend of Qlik expanding their scope.
Donald Farmer, principal, TreeHive Strategy

“This is a good acquisition for Qlik,” Farmer said. “In terms of competition, this more complete platform enables them to position effectively in a broader space than, say, Tableau.”

Farmer said the Qlik-Podium acquisition also makes Qlik more resemble companies in the enterprise BI space like Tibco and Microsoft that offer end-to-end self-service data management, including software for acquiring, cleansing and curating data, plus analytics and collaboration tools.

“Together with announcements that Qlik made at their Qonnections conference in May about machine learning and big data analytics, this is part of a trend of Qlik expanding their scope,” Farmer said.

Qlik gets data lake capabilities

Podium often is associated with managing data lake environments that feed big data applications, although it says its platform can handle all types of enterprise data. The Podium architecture is built on top of Hadoop, which Barth said makes the technology less expensive for enterprises running tens of thousands of processing jobs a night.

David Menninger, an analyst at Ventana Research, said he was surprised at the Qlik-Podium acquisition announcement.

In part, that’s “because Qlik has not been particularly strong in the data lake market because of their in-memory architecture, but Podium is largely focused on data lakes or big data implementations,” Menninger said.

Nonetheless, Menninger said he sees some positive potential for the deal for Qlik and its users.

“As analytics vendors add more data preparation capabilities, Podium Data’s capabilities can significantly enhance the value of data processed using Qlik,” he said.

News writer Mark Labbe contributed to this story.

Startup Arrcus aims NOS at Cisco, Juniper in the data center

Startup Arrcus has launched a highly scalable network operating system, or NOS, designed to compete with Cisco and Juniper Networks for the entire leaf-spine network in the data center.

Arrcus, based in San Jose, Calif., introduced its debut product, ArcOS, this week. Additionally, the startup announced $15 million in Series A funding from venture capital firms Clear Ventures and General Catalyst.

ArcOS, which has Debian Open Network Linux at its core, enters a crowded market of companies offering stand-alone network operating systems for switching and routing, as well as systems of integrated software and hardware. The latter is a strength of traditional networking companies, such as Cisco and Juniper.

While Arrcus has some catching up to do against its rivals, no company has taken a dominating share of the NOS-only market, said Shamus McGillicuddy, an analyst at Enterprise Management Associates (EMA), based in Boulder, Colo. The majority of vendors count customers in the tens or hundreds at most.

Many companies testing pure-open source operating systems today are likely candidates for commercial products. Vendors, however, must first prove the technology is reliable and fits the requirements of a corporate data center.

“Arrcus has a chance to capture a share of the data center operators that are still thinking about disaggregation,” McGillicuddy said. Disaggregation refers to the separation of the NOS from the underlying hardware.

ArcOS hardware support

Arrcus has built its own routing stack that is highly scalable, so it’s ideal for covering the entire leaf-and-spine network.
Shamus McGillicuddy, analyst, Enterprise Management Associates

Arrcus supports ArcOS on Broadcom chipsets and white box hardware from Celestica, Delta Electronics, Edgecore Networks and Quanta Computer. The approved chipsets are the StrataDNX Jericho+ and StrataXGS Trident 3, Trident 2 and Tomahawk. Architecturally, ArcOS can operate on other silicon and hardware, but the vendor does not support those configurations.

ArcOS is a mix of open source and proprietary software. The company, for example, uses its version of router protocols, including Border Gateway Protocol, Intermediate System to Intermediate System, Multiprotocol Label Switching and Open Shortest Path First.
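ArcOS's configuration syntax is not public, but the protocols listed above are the same ones that open routing stacks expose. A hedged sketch, in FRRouting-style syntax, of the kind of minimal BGP and OSPF setup such a NOS handles (the ASNs, addresses and prefixes are illustrative, not Arrcus specifics):

```
router bgp 65001
 neighbor 203.0.113.2 remote-as 65002
 address-family ipv4 unicast
  network 198.51.100.0/24
 exit-address-family
!
router ospf
 network 10.0.0.0/24 area 0
```

Running a full routing stack like this on every leaf and spine switch is what lets a NOS vendor cover the whole fabric rather than just the top-of-rack layer.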

“Arrcus has built its own routing stack that is highly scalable, so it’s ideal for covering the entire leaf-and-spine network,” McGillicuddy said. “The routing scalability also gives Arrcus the ability to do some sophisticated traffic engineering.”

The more sophisticated uses for ArcOS include internet peering for internet service providers and makers of content delivery networks, according to Devesh Garg, CEO at Arrcus. “We feel ArcOS can be used anywhere.”

ArcOS analytics

Arrcus is also providing analytics for monitoring and optimizing the performance and security of switching and routing, McGillicuddy said. The company has based its analytics on the control plane and data plane telemetry streamed from the NOS.

Because a lot of other NOS vendors lack analytics, “Arrcus is emerging with a more complete solution from an operational standpoint,” McGillicuddy said. According to EMA research, many enterprises want embedded analytics in their network infrastructure.

Today, a hardware-agnostic NOS is mostly used by the largest of financial institutions, cloud service providers, and internet and telecommunication companies, analysts said. Tackling networking through disaggregated hardware and software typically requires a level of IT sophistication not found in mainstream enterprises.

Nevertheless, companies find the concept attractive because of the promise of cheaper hardware, faster innovation and less dependence on a single vendor. As a result, the use of a stand-alone NOS is gaining some traction.

Last year, for example, Gartner included NOS makers in its Magic Quadrant for Data Center Networking for the first time. Big Switch Networks and Cumulus Networks met the criteria for inclusion in the “visionaries” quadrant, along with VMware and Dell EMC.

Missions acquisition will simplify Slack integrations

Slack plans to use the technology gained from its acquisition of Missions, a division of the startup Robots & Pencils, to make it easier for non-developers to customize workflows and integrations within its team collaboration app.

A Slack user with no coding knowledge can use Missions to build widgets for getting more work done within the Slack interface. For example, a human resources department could use a Missions widget to track and approve interviews with job applicants.

The Missions tool could also power an employee help desk system within Slack, or be used to create an onboarding bot that keeps new hires abreast of the documents they need to sign and the orientations they must attend. 

“In the same way that code libraries make it easier to program, Slack is trying to make workflows easier for everyone in the enterprise,” said Wayne Kurtzman, an analyst at IDC. “Without training, users will be able to create their own automated workflows and integrate with other applications.”

Slack said it would take a few months to add Missions to its platform. It will support existing Missions customers for free during that time. In a note to its 200,000 active developers, Slack said the Missions purchase would benefit them too, by making it easier to connect their Slack integrations to other apps.

Slack integrations help startup retain market leadership

The acquisition is Slack’s latest attempt to expand beyond its traditional base of software engineers and small teams. More than 8 million people in 500,000 organizations now use the platform, which was launched in 2013, and 3 million of those users have paid accounts.

With more than 1,500 third-party apps available in its directory, Slack has more outside developers than competitors such as Microsoft Teams and Cisco Webex Teams. The vendor has sought to capitalize on that advantage by making Slack integrations more useful.

Earlier this year, Slack introduced a shortcut that lets users send information from Slack to business platforms like Zendesk and HubSpot. Slack could be used to create a Zendesk ticket asking the IT department for a new desktop monitor, for example.

The automation of workflows, including through chatbots, is becoming increasingly important to enterprise technology buyers, according to Alan Lepofsky, an analyst at Constellation Research, based in Cupertino, Calif.

But it remains to be seen whether the average Slack user with no coding experience will take advantage of the Missions tool to build Slack integrations.

“I believe the hurdle in having regular knowledge workers create them is not skill, but rather even knowing that they can, or that they should,” Lepofsky said.

Qumulo storage parts the clouds with its K-Series active archive

Scale-out NAS startup Qumulo has added a dense cloud archiving appliance to help companies find hidden value in idle data.

Known as the K-Series, the product is an entry-level complement to Qumulo’s C-Series hybrid and all-flash NVMe P-Series primary NAS arrays. The K-144T active archive target embeds the Qumulo File Fabric (QF2) scalable file system on a standard 1U server.

Qumulo, based in Seattle, didn’t disclose the source of the K-Series’ underlying hardware, but it has an OEM deal with Hewlett Packard Enterprise to package the Qumulo Scalable File System on HPE Apollo servers. Qumulo storage customers need a separate software subscription to add the K-Series archive to an existing Qumulo primary storage configuration.

“It’s routine for our customers to be storing billions of files, either tiny files generated by machines or large files generated by video,” Qumulo chief marketing officer Peter Zaballos said. “We now have a complete product line, from archiving up to blazing high performance.”

Analytics and cloud bursting

Customers can build a physical K-Series cluster with a minimum of six nodes and scale by adding single nodes. That allows them to replicate data from the K-Series target to an identical Qumulo storage cluster in AWS for analytics or cloud bursting. A cluster can scale to 1,000 nodes.

“There’s no need to pull data back from the cloud. You can do rendering against a tier of storage in the cloud and avoid [the expense] of data migration,” Qumulo product manager Jason Sturgeon said.

Each Qumulo storage K-Series node scales to 144 TB of raw storage. Each node accepts a dozen 12 TB HDDs for storage, plus three SSDs to capture read metadata. QumuloDB analytics collects the metadata information as the data gets written. A nine-node configuration provides 1 PB of usable storage.
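The capacity figures above can be sanity-checked with simple arithmetic. This sketch assumes decimal terabytes; the implied usable-to-raw ratio is inferred from the article's numbers, not vendor-stated:

```python
# Back-of-the-envelope check of the K-Series capacity figures.
hdd_tb = 12            # 12 TB per HDD
hdds_per_node = 12     # a dozen HDDs per 1U node
raw_per_node_tb = hdd_tb * hdds_per_node
print(raw_per_node_tb)  # 144 TB raw per node, matching the article

nodes = 9
raw_cluster_tb = nodes * raw_per_node_tb   # 1,296 TB raw in nine nodes
usable_tb = 1000                           # "1 PB of usable storage"
efficiency = usable_tb / raw_cluster_tb    # implied overhead for erasure
print(raw_cluster_tb, round(efficiency, 2))  # coding, metadata SSDs, etc.
```

The roughly 77% usable-to-raw ratio is consistent with erasure-coded protection plus the metadata SSDs each node carries, though Qumulo does not break the overhead down in those terms.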

Qumulo said it designed the K-Series arrays with an Intel Xeon D system-on-a-chip processor to reduce power consumption.

Exploding market for NFS, object archiving

Adding a nearline option to Qumulo storage addresses the rapid growth of unstructured data that requires file-based and object storage, said Scott Sinclair, a storage analyst at Enterprise Strategy Group.

“Qumulo is positioning the K-Series as a lower-cost, higher-density option for large-capacity environments,” Sinclair said. “There is a tremendous need for cheap and deep storage. Many cheap-and-deep workloads are using NFS protocols. This isn’t a file gateway that you retrofit on top of an object storage box. You can use normal file protocols.”

Those file protocols include NFS, SMB and REST-based APIs.

Sturgeon said the K-Series can handle reads at 6 Gbps and writes at 3 Gbps, per 1 PB of usable capacity.

To eliminate tree walks, QF2 keeps aggregate metadata for each folder up to date as files are written. Checks run every 15 seconds to provide visibility into the amount of data stored within the directory structure, allowing storage to be accessed and queried in nearly real time.
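The idea of trading a per-write update for an instant query can be sketched in a toy model. This is an illustration of the aggregation technique, not Qumulo's actual implementation:

```python
# Toy sketch: keep per-directory byte totals current on every write,
# so a capacity query is a dictionary lookup instead of a tree walk.
from pathlib import PurePosixPath

class AggregateTree:
    def __init__(self):
        self.sizes = {}  # directory path -> total bytes beneath it

    def write_file(self, path, nbytes):
        # Propagate the new bytes to every ancestor directory.
        for parent in PurePosixPath(path).parents:
            key = str(parent)
            self.sizes[key] = self.sizes.get(key, 0) + nbytes

    def dir_size(self, directory):
        # O(1) lookup; no recursive walk of the subtree needed.
        return self.sizes.get(directory, 0)

tree = AggregateTree()
tree.write_file("/projects/video/shot1.mov", 500)
tree.write_file("/projects/video/shot2.mov", 250)
tree.write_file("/projects/notes.txt", 10)
print(tree.dir_size("/projects"))        # 760
print(tree.dir_size("/projects/video"))  # 750
```

The cost moves to write time, which is why a real system batches the propagation (QF2's 15-second checks) rather than updating every ancestor synchronously.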

Qumulo has garnered more than $220 million in funding, including a $93 million Series D round earlier this month. Qumulo founders Peter Godman, Aaron Passey and Neal Fachan helped develop the Isilon OneFS clustered file system, leading the company to an IPO in 2006. EMC paid $2.25 billion to acquire the Isilon technology in 2010.

Godman is Qumulo CTO, and Fachan is chief scientist. Passey left in 2016 to take over as principal engineer at cloud hosting provider Dropbox.

Tableau acquisition of MIT AI startup aims at smarter BI software

Tableau Software has acquired AI startup Empirical Systems in a bid to give users of its self-service BI platform more insight into their data. The Tableau acquisition, announced today, adds an AI-driven engine that’s designed to automate the data modeling process without requiring the involvement of skilled statisticians.

Based in Cambridge, Mass., Empirical Systems started as a spinoff from the MIT Probabilistic Computing Project. The startup claims its analytics engine and data platform is able to automatically model data for analysis and then provide interactive and predictive insights into that data.

The technology is still in beta, and Francois Ajenstat, Tableau’s chief product officer, wouldn’t say how many customers are using it as part of the beta program. But he said the current use cases are broad and include companies in retail, manufacturing, healthcare and financial services. That wide applicability is part of the reason why the Tableau acquisition happened, he noted.

Catch-up effort with advanced technology

In some ways, however, the Tableau acquisition is a “catch-up play” on providing automated insight-generation capabilities, said Jen Underwood, founder of Impact Analytix LLC, a product research and consulting firm in Tampa. Some other BI and analytics vendors “already have some of this,” Underwood said, citing Datorama and Tibco as examples.


Empirical’s automated modeling and statistical analysis tools could put Tableau ahead of its rivals, she said, but it’s too soon to tell without having more details on the integration plans. Nonetheless, she said she thinks the technology will be a useful addition for Tableau users.

“People will like it,” she said. “It will make advanced analytics easier for the masses.”

Tableau already has been investing in AI and machine learning technologies internally. In April, the company released its Tableau Prep data preparation software, with embedded fuzzy clustering algorithms that employ AI to help users group data sets together. Before that, Tableau last year released a recommendation engine that shows users recommended data sources for analytics applications. The feature is similar to how Netflix suggests movies and TV shows based on what a user has previously watched, Ajenstat explained.

Integration plans still unclear

Ajenstat wouldn’t comment on when the Tableau acquisition will result in Empirical’s software becoming available in Tableau’s platform, or whether customers will have to pay extra for the technology.

Empirical CEO Richard Tibbetts on the startup's automated data modeling technology.

“Whether it’s an add-on or how it’s integrated, it’s too soon to talk about that,” he said.

However, he added that the Empirical engine will likely be “a foundational element” in Tableau, at least partially running behind the scenes, with a goal that “a lot of different things in Tableau will get smarter.”

Unlike some predictive algorithms that require large stores of data to function properly, Empirical’s software works with “data of all sizes, both large and small,” Ajenstat said. When integration does eventually begin to happen, Ajenstat said Tableau hopes to be able to better help users identify trends and outliers in data sets and point them toward factors they could drill into more quickly.

Augmented analytics trending

Tableau’s move around augmented analytics is in line with what Gartner pointed to as a key emerging technology in its 2018 Magic Quadrant report on BI and analytics platforms.

Various vendors are embedding machine learning tools into their software to aid with data preparation and modeling and with insight generation, according to Gartner. The consulting and market research firm said the augmented approach “has the potential to help users find the most important insights more quickly, particularly as data complexity grows.”

Such capabilities have yet to become mainstream product requirements for BI software buyers, Gartner said in the February 2018 report. But they are “a proof point for customers that vendors are innovating at a rapid pace,” it added.

The eight-person team from Empirical Systems will continue to work on the software after the Tableau acquisition. Tableau, which didn’t disclose the purchase price, also plans to create a research and development center in Cambridge.

Senior executive editor Craig Stedman contributed to this story.

Apstra bolsters IBN with customizable analytics

Startup Apstra has added customizable analytics to its intent-based networking software that can spot potential problems and report them to network managers.

Apstra this week introduced intent-based analytics as part of an upgrade to the company’s Apstra Operating System (AOS). The latest version, AOS 2.1, also includes other enhancements, such as support for additional network hardware and the ability to use a workload’s MAC or IP address to find it in an IP fabric.

In general, AOS is a network operating system designed to let managers automatically configure and troubleshoot switches. Apstra focuses on hardware transporting Layer 2 and Layer 3 traffic between devices from multiple vendors, including Arista Networks, Cisco, Dell and Juniper Networks. Apstra also supports white-box hardware running the Cumulus Networks OS.

AOS, which can run on a virtualized x86 server, communicates with the hardware through installed drivers or the hardware’s REST API. Data on the state of each device is continuously fed to the AOS data store. Alerts are sent to network operators when the state data conflicts with how a device is configured to operate.
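The intent-versus-state check described above can be sketched in a few lines. This is a minimal illustration, not AOS's actual data model; the device names, fields, and alert strings are invented.

```python
# Hypothetical sketch: compare intended configuration against observed
# device state and raise an alert for every mismatch, in the spirit of
# the AOS behavior described above.

def check_intent(intended, observed):
    """Return a list of alert strings for intent/state mismatches."""
    alerts = []
    for device, expected in intended.items():
        actual = observed.get(device)
        if actual is None:
            alerts.append(f"{device}: no telemetry received")
            continue
        for field, want in expected.items():
            got = actual.get(field)
            if got != want:
                alerts.append(f"{device}: {field} is {got!r}, expected {want!r}")
    return alerts

intended = {"leaf1": {"admin_state": "up", "mtu": 9000}}
observed = {"leaf1": {"admin_state": "down", "mtu": 9000}}
print(check_intent(intended, observed))
# ["leaf1: admin_state is 'down', expected 'up'"]
```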

AOS 2.1 takes the software’s capabilities up a notch through tools that operators can use to choose specific data they want the Apstra analytics engine to process.

“This is a logical progression for Apstra with AOS,” said Brad Casemore, an analyst at IDC. “Pervasive, real-time analytics should be an integral element of any intent-based networking system.”

Using Apstra analytics

The first step is for operators to define the type of data AOS will collect. For example, managers could ask for the CPU utilization on all spine switches. Also, they could request queries of all the counters for server-facing interfaces and of the routing tables for links connecting leaf and spine switches.

Mansour Karam, CEO, Apstra

“If you were to add a new link, add a new server, or add a new spine, the data would be included automatically and dynamically,” Apstra CEO Mansour Karam said.

Once the data is defined, operators can choose the conditions under which the software will examine the information. Apstra provides preset scenarios or operators can create their own. “You can build this [data] pipeline in the way that you want, and then put in rules [to extract intelligence],” Karam said.

Useful information that operators can extract from the system includes:

  • traffic imbalances on connections between leaf and spine switches;
  • links reaching traffic capacity;
  • the distribution of north-south and east-west traffic; and
  • the available bandwidth between servers or switches.
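The collect-then-evaluate workflow described above can be sketched as follows. The link names, utilization figures, and rule logic are fabricated for illustration; Apstra's real probes are defined through AOS's own interfaces, not Python.

```python
# Hypothetical sketch of an analytics pipeline: collect utilization
# samples from leaf-spine links, then apply rules to extract two of the
# signals listed above (hot links and traffic imbalances).

def evaluate_probe(samples, threshold):
    """Flag links whose utilization exceeds a capacity threshold."""
    return [link for link, util in samples.items() if util > threshold]

def find_imbalance(samples, tolerance=0.2):
    """Flag links whose utilization deviates from the mean by more than tolerance."""
    mean = sum(samples.values()) / len(samples)
    return [link for link, util in samples.items() if abs(util - mean) > tolerance]

# Fabricated utilization samples (fraction of link capacity):
samples = {"leaf1-spine1": 0.42, "leaf1-spine2": 0.91, "leaf2-spine1": 0.40}

print(evaluate_probe(samples, threshold=0.80))  # ['leaf1-spine2']
print(find_imbalance(samples))                  # ['leaf1-spine2']
```

Because the rules operate on whatever samples are fed in, adding a new link or spine (as Karam notes) simply adds entries to the sample set without changing the rules.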

Enterprises moving slowly with IBN deployments

Other vendors, such as Cisco, Forward Networks and Veriflow, are building out intent-based networking (IBN) systems to drive more extensive automation. Analytics plays a significant role in making automation possible.

“Nearly every enterprise that adopts advanced network analytics solutions is using it to enable network automation,” said Shamus McGillicuddy, an analyst at Enterprise Management Associates, based in Boulder, Colo. “You can’t really have extensive network automation without analytics. Otherwise, you have no way to verify that what you are automating conforms with your intent.”

Today, most IT staffs use command-line interfaces (CLIs) to manually program switches and scores of other devices that comprise a network’s infrastructure. IBN abstracts configuration requirements from the CLI and lets operators use declarative statements within a graphical user interface to tell the network what they want. The system then makes the necessary changes.

The use of IBN is just beginning in the enterprise. Gartner predicts the number of commercial deployments will be in the hundreds through mid-2018, increasing to more than 1,000 by the end of next year.

Atomist extends CI/CD to automate the entire DevOps toolchain

Startup Atomist hopes to revolutionize development automation throughout the application lifecycle before traditional application release automation vendors catch up.

Development automation has been the elusive goal of a generation of tools, particularly DevOps tools, that promise continuous integration and continuous delivery. The latest is Atomist and its development automation platform, which aims to automate as many of the mundane tasks as possible in the DevOps toolchain.

Atomist ingests information about an organization’s software projects and processes to build a comprehensive understanding of those projects. Then it creates automations for the environment, which use programming tools such as parser generators and microgrammars to parse and contextualize code.

The system also correlates event streams pulled from various stages of development and represents them as code in a graph database known as the Cortex. Because Atomist’s founders said they believe the CI pipeline model falls short, Atomist takes an event-based approach to model everything in an organization’s software delivery process as a stream of events. The event-driven model also enables development teams to compose development flows based on events.
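The event-correlation idea described above can be sketched simply: events from different tools are linked by a shared key, such as a commit SHA. The event shapes below are invented for illustration and bear no relation to the Cortex's actual schema.

```python
# Simplified sketch of correlating a delivery event stream, in the
# spirit of the event-driven model described above (hypothetical shapes).
from collections import defaultdict

def correlate(events):
    """Group push/build/deploy events into per-commit timelines."""
    timeline = defaultdict(list)
    for event in events:
        timeline[event["sha"]].append(event["type"])
    return dict(timeline)

events = [
    {"type": "push",   "sha": "abc123"},
    {"type": "build",  "sha": "abc123"},
    {"type": "deploy", "sha": "abc123"},
    {"type": "push",   "sha": "def456"},
]
print(correlate(events))
# {'abc123': ['push', 'build', 'deploy'], 'def456': ['push']}
```

A flow composed on top of such a stream might, for example, fire only when a commit has a passing build but no deploy event, which is the kind of composition the event-driven model enables.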

In addition, Atomist automatically creates Git repositories, configures systems for issue tracking and continuous integration, and creates chat channels to consolidate project notifications and deliver information to the right people.

“Atomist is an interesting and logical progression of DevOps toolchains, in that it can traverse events across a wide variety of platforms but present them in a fashion such that developers don’t need to context switch,” said Stephen O’Grady, principal analyst at RedMonk in Portland, Maine. “Given how many moving parts are involved in DevOps toolchains, the integrations are welcome.”

Mik Kersten, a leading DevOps expert and CEO at Tasktop Technologies, has tried Atomist firsthand and calls it a fundamentally new approach to managing delivery. As delivery pipelines become increasingly complex, the sources of waste move well beyond the code and into the tools spread across the pipeline, Kersten noted.

Atomist represents a new class of DevOps product that goes beyond CI, which is “necessary, but not sufficient,” said Rod Johnson, Atomist CEO and creator of the Spring Framework.

Rod Johnson, CEO, Atomist

The rise of microservices means teams now run tens or hundreds of services in their environments, which introduces trouble spots as developers collaborate on, deploy and monitor the lifecycle of all those services, Johnson said.

This is particularly important for security, where keeping services consistent is paramount. In last year’s Equifax breach, hackers gained access through an unpatched version of Apache Struts. With Atomist, an organization can identify and upgrade old software automatically across potentially hundreds of repositories, Johnson said.
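The cross-repository remediation Johnson describes can be sketched as a scan for a vulnerable dependency version. The package names, version numbers, and repository layout below are fabricated; this is not Atomist's implementation, just the general shape of the check.

```python
# Hypothetical sketch: scan each repository's dependency list for a
# vulnerable version range and report which repos need upgrading.

VULNERABLE = ("struts2-core", "2.5.13")  # versions below this are affected

def needs_upgrade(deps, vulnerable=VULNERABLE):
    """Return True if deps pins the named package below the fixed version."""
    name, fixed = vulnerable
    version = deps.get(name)
    if version is None:
        return False
    # Naive numeric comparison; real tools use proper version parsing.
    return tuple(map(int, version.split("."))) < tuple(map(int, fixed.split(".")))

repos = {
    "billing":  {"struts2-core": "2.5.10"},
    "frontend": {"struts2-core": "2.5.16"},
    "reports":  {"spring-core": "5.0.2"},
}
flagged = [name for name, deps in repos.items() if needs_upgrade(deps)]
print(flagged)  # ['billing']
```

An automation platform would then go one step further and open the upgrade pull request in each flagged repository rather than merely reporting it.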

Tasktop’s Kersten agreed that this approach to developer-centric automation “goes way beyond what we got with CI.” Atomist created a Slack bot that incorporates its automation facilities, driven by a development automation engine reminiscent of model-driven development or aspect-oriented programming, but one that provides generative facilities not only for code but across project resources and other tools, Kersten said. A notification system informs users what the automations are doing.

Most importantly, Atomist is fully extensible, and its entire internal data model can be exposed in GraphQL.
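The article notes the internal model is exposed via GraphQL; the query below and the response shape are hypothetical, meant only to illustrate the style of access, not Atomist's actual schema.

```python
# Hypothetical GraphQL usage: ask for commits and their build results,
# then walk the JSON-shaped response. Field names are invented.

query = """
{
  commits(branch: "master") {
    sha
    builds { status }
  }
}
"""

# A response as a GraphQL endpoint might return it (fabricated data):
response = {
    "data": {
        "commits": [
            {"sha": "abc123", "builds": [{"status": "passed"}]},
            {"sha": "def456", "builds": [{"status": "failed"}]},
        ]
    }
}

# Extract commits with at least one failed build:
failed = [c["sha"] for c in response["data"]["commits"]
          if any(b["status"] == "failed" for b in c["builds"])]
print(failed)  # ['def456']
```

Exposing the model this way is what makes the platform extensible: integrations like Tasktop's can query exactly the slice of delivery state they need.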

Tasktop has already explored ways to connect Atomist to Tasktop’s Integration Hub and the 58 Agile and DevOps tools it currently supports, Kersten said.

Automation built into development

As DevOps becomes more widely adopted, integrating automation into the entire DevOps toolchain is critical to help streamline the development process so programmers can develop faster, said Edwin Yuen, an analyst at Enterprise Strategy Group in Milford, Mass.


“The market to integrate automation and development will grow, as both the companies that use DevOps and the number of applications they develop increase,” he said. Atomist’s integration in the code creation and deployment process, through release and update management, “enables automation not just in the development process but also in day two and beyond application management,” he said.

Atomist joins other approaches such as GitOps and Bitbucket Pipelines that target the developer who chooses the tools used across the complete lifecycle, said Robert Stroud, an analyst at Forrester Research in Cambridge, Mass.

“Selection of tooling such as Atomist will drive developer productivity, allowing developers to focus on code, not pipeline development — this is good for DevOps adoption and acceleration,” he said. “The challenge for these tools is that, although new code fits well, deployment solutions are selected within enterprises by ops teams and also need to support on-premises deployment environments.”

For that reason, look for traditional application release automation vendors, such as IBM, XebiaLabs and CA Technologies, to deliver features similar to Atomist’s capabilities in 2018, Stroud said.