New SoftIron management tool targets Ceph storage complexity

Startup SoftIron released a new HyperDrive Storage Manager tool that aims to make open source Ceph software-defined storage, and the hardware it runs on, easier to use.

London-based SoftIron designs, builds and assembles dedicated HyperDrive appliances for Ceph software-defined storage at its manufacturing facility in Newark, Calif. Now SoftIron has developed a tool to assist system administrators in managing the software and hardware in their Ceph storage clusters.

“We’re integrating it in the way that you would normally only see in a proprietary vendor’s storage,” said Andrew Moloney, SoftIron’s vice president of strategy.

Moloney said the new HyperDrive Storage Manager could automatically discover and deploy new nodes without the user having to resort to the command-line interface. If a drive goes down, the graphical user interface can pinpoint the physical location, and users can see a flashing light next to the drive in the appliance. HyperDrive Storage Manager also can lock out multiple administrators to prevent conflicting commands, Moloney said.

“Many of those things have not been addressed and can’t be addressed if you’re not looking at the hardware and the software as one entity,” Moloney said.

Enrico Signoretti, a research analyst at GigaOm, said one of the biggest problems with Ceph is complexity. The optimized SoftIron software/hardware stack and improved graphical user interface should help to lower the barrier for enterprises to adopt Ceph, Signoretti said.

SoftIron’s HyperDrive Storage Manager tool aims to ease the management of open source Ceph storage.

SoftIron started shipping its ARM-based HyperDrive appliances for Ceph about a year ago. Appliances are available in all-flash, all-disk and hybrid configurations. The most popular model is the 120 TB HyperDrive Density Storage Node with spinning disks and solid-state drives, according to Moloney. He said the average deployment is about 1 PB.

SoftIron has about 20 customers using HyperDrive in areas such as high-performance computing, analytics and data-intensive research projects. Customers include the University of Minnesota’s Supercomputing Institute, the University of Kentucky, national laboratories, government departments, and financial service firms, Moloney said.

SoftIron’s competition includes Ambedded Technology, a Taiwanese company that also makes an ARM-based Ceph Storage Appliance, as well as Red Hat and SUSE, which both offer supported versions of Ceph and tested third-party server hardware options.

Dennis Hahn, principal storage analyst at Omdia, said Red Hat and SUSE tend to focus on enterprise and traditional data centers, and SoftIron could find opportunities with smaller data centers and edge deployments for use cases such as retail, healthcare and industrial automation, with sensors gathering data.

Hahn said customers often look for lower-cost storage in edge use cases, and SoftIron’s HyperDrive appliances could play well there because their AMD-based ARM processors generally cost less than Intel’s x86 chips.

Moloney said that Ceph can be “quite hardware sensitive” for anyone trying to get the best performance out of it. Citing an example, he said that SoftIron found it could optimize I/O and dramatically improve performance with an ARM64 processor by directly attaching all 14 storage drives. Moloney said that SoftIron also saw that using an SSD just for journaling and spinning media for storage could boost performance at the “right price point.”

Those who assume that software-defined data center technologies — whether storage, network or compute — can run great on “any kind of vanilla hardware” will be disappointed, Moloney said.

“In reality, there are big sacrifices that you make when you decide to do that, especially in open source, when you think about performance and efficiency and scalability,” Moloney said. “Our mission and our vision is about redefining that software-defined data center. The way we believe to do that is to run open source on what we call task-specific appliances.”

In addition to HyperDrive storage, SoftIron plans to release a top-of-rack HyperSwitch, based on the open source SONiC network operating system, and a HyperCast transcoding appliance, which uses open source FFmpeg software for audio and video processing, within the next three months. Moloney said SoftIron is now “hitting the gas” and moving into an expansion phase since receiving $34 million in Series B funding in March, when he joined the company.

MemVerge pushes big memory computing for every app

Startup MemVerge is making its “Big Memory” software available in early access, allowing organizations to test-drive AI applications in fast, scalable main memory.

MemVerge’s Memory Machine software virtualizes underlying dynamic RAM (DRAM) and Intel Optane devices into large persistent memory lakes. It provides data services from memory, including replication, snapshots and tiered storage.

The vendor, based in Milpitas, Calif., also closed institutional funding of $19 million led by Intel Capital, with participation from Cisco Investments, NetApp and SK Hynix. That adds to $24.5 million MemVerge received from a consortium of venture funds at launch.

The arrival of Intel Optane memory devices has moved storage class memory (SCM) from a fringe technology to one showing up in major enterprise storage arrays. Other than in-memory databases, however, most applications are not designed to run efficiently in volatile DRAM. Application code first needs to be rewritten for memory, which does not natively include data services or enable data sharing by multiple servers.

MemVerge Memory Machine at work

Memory Machine software will usher in big memory computing to serve legacy and modern applications and break memory-storage bottlenecks, MemVerge CEO Charles Fan said.

Charles Fan

“MemVerge Memory Machine is doing to persistent memory what VMware vSphere did to CPUs,” he said.

Prior to launching MemVerge, Fan spent seven years as head of VMware’s storage business unit. He helped create VMware vSAN hyper-converged software. He previously started file virtualization vendor Rainfinity and sold it to EMC in 2005. MemVerge cofounder Shuki Bruck helped start XtremIO, an early all-flash storage array that now is part of Dell EMC’s midrange storage portfolio. Bruck was also a founder of Rainfinity.

MemVerge has revised its product since emerging from stealth in 2019. The startup initially planned to virtualize memory and storage in Intel two-socket servers and scale up to 128 nodes. Fan said the company decided instead to offer Memory Machine solely as a software subscription for x86 servers. Financial services, AI, big data and machine learning are the expected use cases.

MemVerge plans to introduce full storage services in 2021. That would allow programming of Intel Optane cards as low-latency block storage and tiering of data to back-end SAS SSDs.

“Our first step is to target in-memory applications and memory-intensive applications that have minimal access to storage. And in this case, we intercept all of the memory services and declare a value through the memory interface for existing applications,” Fan said.

Phase two of Memory Machine’s development will include a software development kit to program modern applications that require “zero I/O persistence,” Fan said.
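
To make “zero I/O persistence” concrete, here is a minimal sketch of the general byte-addressable persistent memory programming model, written in Python with mmap over a hypothetical DAX-mounted Optane device. It illustrates the concept only; it is not MemVerge’s SDK.

```python
import mmap
import os

# Hypothetical file on a DAX-mounted persistent memory device,
# e.g., an Intel Optane namespace mounted at /mnt/pmem.
path = "/mnt/pmem/example.dat"

fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)

# Map the region: loads and stores now address the media directly,
# with no read()/write() system calls in the data path.
buf = mmap.mmap(fd, 4096)
buf[0:5] = b"hello"  # an ordinary store persists the data in place
buf.flush()          # msync() to guarantee durability
buf.close()
os.close(fd)
```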

The combination of Intel Optane with Memory Machine vastly increases the byte-addressable storage capacity of main memory, said Eric Burgener, a vice president of storage at IT analysis firm IDC.

“This is super interesting for AI, big data analytics, artificial intelligence and things like that, where you can load a large working set in main memory and run it much faster than [using] block-addressable NVMe flash storage,” Burgener said.

“As long as you have a bunch of Optane cards and the MemVerge software layer running on the server, you can take any application and run it at memory speeds, without rewrites.”

Memory as storage: Gaining traction?

The MemVerge release underscores a flurry of new activity surrounding the use of persistent memory for disaggregated compute and storage.

Startup Alluxio in April reached a strategic partnership with Intel to implement its cloud orchestration file software with Intel Optane cards in Intel Xeon Scalable-powered servers. The combination allows disaggregated cloud storage to efficiently use file system semantics, as well as tap into DRAM or SSD media as buffer or page caches, Alluxio CEO Haoyuan Li said.

Meanwhile, semiconductor maker Micron Technology — which partnered with Intel to initially develop the 3D XPoint media used in Optane devices — recently introduced an open source object storage engine geared for flash and persistent memory. Micron said Red Hat is among the partners helping to fine-tune Heterogeneous Memory Storage Engine for upstream inclusion in the Linux kernel.

Kite intros code completion for JavaScript developers

Kite, a software development tools startup specializing in AI and machine learning, has added code-completion capabilities for JavaScript developers.

San Francisco-based Kite initially targeted Python developers with its AI-powered code completion technology. JavaScript is arguably the most popular programming language, and Kite’s move should be a welcome addition for JavaScript developers, as the technology can predict the next string of code they will write and complete it automatically.

“The use of AI is definitely making low-code even lower-code for sure, and no-code even more possible,” said Ronald Schmelzer, an analyst at Cognilytica in Ellicott City, Md. “AI systems are really good at determining patterns, so you can think of them as really advanced wizard or templating systems that can try to determine what you’re trying to do and suggest code or blocks or elements to complete your code.”

Kite’s Line-of-Code Completions feature uses advanced machine learning models to cut some of the mundane tasks that programmers perform to build applications, such as setting up build processes, searching for code snippets on Google, cutting and pasting boilerplate code from Stack Overflow, and repeatedly solving the same error messages, said Adam Smith, founder and CEO of Kite, in an interview.

Kite’s JavaScript code completions are currently available in private beta and can suggest code a developer has previously used or tap into patterns found in open source code files, Smith said. The deep learning models used to inform the Kite knowledge base have been trained on more than 22 million open source JavaScript files, he said.
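
To illustrate what a line-of-code completion looks like, here is a short Python fragment (Python being Kite’s original target language). The completed call is hypothetical, not Kite’s actual output.

```python
import matplotlib.pyplot as plt

x = [0, 1, 2, 3]
y = [1, 4, 9, 16]

# The developer starts typing:
#     plt.plot(x, y,
# and a line-of-code completion engine might suggest the whole call:
plt.plot(x, y, color="blue", linewidth=2)
plt.show()
```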

Kite aims to advance the code-completion art

Unlike other code completion capabilities, Kite features layers of filtering such that only the most relevant completion results are returned, rather than a long list of completions ranked by probability, Smith said. Moreover, Kite’s completions work in .js, .jsx and .vue files and the system processes code locally on the user’s computer, rather than sending code to a cloud server for processing.

Ronald Schmelzer, analyst, Cognilytica

Kite’s engineers took their time training the tool on the ever-growing JavaScript ecosystem and its frameworks, APIs and design patterns, Smith said. Thus, Kite works with popular JavaScript libraries and frameworks like React, Vue, Angular and Node.js. The system analyzes open source projects on GitHub and applies that data to machine learning models trained to predict the next word or words of code as programmers write in real time. This smarter programming environment makes it possible for developers to focus on what’s unique about their application.

There are other tools that offer code completion capabilities, such as the IntelliCode feature in the Microsoft Visual Studio IDE. IntelliCode provides more primitive code completion than Kite, Smith claimed. IntelliCode is the next generation of Microsoft’s older IntelliSense code completion technology. IntelliCode will predict the next word of code based on basic models, while Kite’s tool uses richer, more advanced deep learning models trained to predict further ahead to whole lines, and even multiple lines of code, Smith said.

Moreover, Kite focuses on code completion, and not code correction, because programming code has to be exactly correct. For example, if you send someone a text with autocorrect errors, the tone of the message may still come across properly. But if you mistype a single letter of code, a program will not run.

Still, AI-powered code completion “is still definitely a work in progress and much remains to be done, but OutSystems and others are also looking at AI-enabling their suites to deliver faster and more complete solutions in the low-code space,” Schmelzer said.

In addition to the new JavaScript code completion technology, Kite also introduced Kite Pro, the company’s first paid offering of code completions for Python powered by deep learning. Kite Pro provides features such as documentation in the Kite Copilot, which offers documentation for more than 800 Python libraries.

Kite works as a plugin for all of the most popular code editors, including Atom, JetBrains’ PyCharm/IntelliJ/WebStorm, Spyder, Sublime Text 3, VS Code and Vim. The product is available on Mac, Windows and Linux.

The basic version of Kite is free; however, Kite Pro costs $16.60 per user, per month. Custom team pricing also is available for teams that contact the company directly, Smith said.

Biometrics firm fights monitoring overload with log analytics

Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.

BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.

“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”

The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.

At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.

Log analytics tools flourish under microservices

Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.

“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”

While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.

That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since Coralogix’s relaunch in 2017, the company has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.

Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.

In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.

The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.

“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”

Log analytics correlate app changes to errors

The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.

The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.
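
The underlying pattern is simple: the CI/CD pipeline injects a release identifier into every log line so the analytics back end can group errors by version. Here is a minimal Python sketch of that idea, with hypothetical service and tag names; it is a generic illustration, not BioCatch’s or Coralogix’s actual integration.

```python
import logging
import os

# Hypothetical release tag injected by the CI/CD pipeline,
# for example through an environment variable set at deploy time.
RELEASE_TAG = os.environ.get("RELEASE_TAG", "fraud-detector:2.14.0")

logging.basicConfig(
    format="%(asctime)s %(levelname)s release=" + RELEASE_TAG + " %(message)s",
    level=logging.INFO,
)

log = logging.getLogger("fraud-detector")
log.info("rollout to ring zero complete")
log.warning("queue backlog 20x higher than baseline")
```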

Coralogix log analytics uses version tags to correlate application issues with specific developer changes.

“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”

BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.

Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.

“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”

Startup Sisu’s data analytics tool aims to answer, ‘Why?’

Armed with $66.7 million in venture capital funding, startup vendor Sisu recently emerged from stealth and introduced the Sisu data analytics platform.

Sisu, founded in 2018 by Stanford professor Peter Bailis and based in San Francisco, revealed on Oct. 16 that it secured $52.5 million in Series B funding, led by New Enterprise Associates, a venture capital firm with more than $20 billion in assets under management. Previously, Sisu secured $14.2 million in funding, led by Andreessen Horowitz, which also participated in the Series B round.

On the same date it revealed the new infusion of capital, the startup rolled out the Sisu data analytics tool for general use, with electronics and IT giant Samsung already listed as one of its customers.

Essentially an automated system for monitoring changes in data sets, Sisu enters a competitive market featuring not only proven vendors but also recent startups such as ThoughtSpot and Looker, which have been able to differentiate themselves enough from other BI vendors to gain a foothold and survive — Looker agreed to be acquired by Google for $2.7 billion in June while ThoughtSpot remains independent.

“Startups have to stand out,” said Doug Henschen, an analyst at Constellation Research. “They can’t present me-too versions of capabilities that are already out there. They can’t be too broad and they also can’t expect companies to risk ripping out and replacing existing systems of mission-critical importance. The sweet spot is focused solutions that complement or extend existing capabilities or that take on new or emerging use cases or challenges.”

The Sisu data analytics platform is just that — highly focused — and not attempting to do anything other than track data.

An organization’s customer conversion rate data is displayed on a sample Sisu dashboard.

It relies on machine learning and statistical analysis to monitor, recognize and explain changes to a given organization’s key performance indicators.

And it’s in that final stage — the explanation — where Sisu wants to differentiate from existing diagnostic tools. Others, according to Bailis, monitor data sets and are automated to send push notifications when changes happen, but don’t necessarily explain why those changes occurred.
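
A toy version of that explanation step: compare a key metric by segment across two periods and rank segments by how much they moved. The pandas sketch below is a generic illustration of the idea, not Sisu’s actual algorithm.

```python
import pandas as pd

# Conversion counts by segment, before and after a KPI dip.
df = pd.DataFrame({
    "period":   ["before"] * 3 + ["after"] * 3,
    "segment":  ["web", "ios", "android"] * 2,
    "visits":   [1000, 800, 900, 1000, 800, 900],
    "converts": [100, 96, 90, 98, 40, 88],
})

df["rate"] = df["converts"] / df["visits"]
pivot = df.pivot_table(index="segment", columns="period", values="rate")
pivot["delta"] = pivot["after"] - pivot["before"]

# The segment with the largest drop is the leading candidate for "why."
print(pivot.sort_values("delta"))
```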

“We’re designed to answer one key question, and be the best at it,” said Bailis, who is on leave from Stanford. “We want to be faster, and we want to be better. There’s intense pressure to build everything into a platform, but I’m a firm believer that doing any one thing well is a company in itself. I’d rather be great at diagnosing than do a bunch of things just OK.”

The speed Bailis referred to comes from the architecture of the Sisu data analytics tool. Sisu is cloud native, which gives it more computing power than an on-premises platform, and its algorithms are built on augmented intelligence.

That speed is indeed a meaningful differentiator, according to Henschen.

“The sweet spot for Sisu is quickly diagnosing what’s changing in critical areas of a business and why,” he said. “It’s appealing to high-level business execs, not the analyst class or IT. The tech is compatible with, and doesn’t try to replace, existing investments in data visualization capabilities.”

Moving forward, Bailis said the Sisu data analytics platform will stay focused on data workflows, but that there’s room to grow even within that focused space.

“Long term, there is a really interesting opportunity for additional workflow operations,” he said. “There’s value because it leads to actions, and we want to own more and more of the action space. You can take action directly from the platform.”

Meanwhile, though survival is a challenge for any startup and many begin with the end goal of being acquired, Bailis said Sisu plans to take on the challenge of independence and compete against established vendors for market share. The recent funding, he said, will enable Sisu to continue to grow its capabilities to take advantage of what he sees as “an insane opportunity.”

Henschen, meanwhile, cautioned that unless Sisu does in fact grow its capabilities, it likely will be more of an acquisition target than a vendor with the potential for long-term survival.

“Sometimes startups come up with innovative technology, but [Sisu] strikes me as an IP [intellectual property] for a feature or set of features likely to be acquired by a larger, broader vendor,” he said. “That might be a good path for Sisu, but it’s early days for the company. I think it would have to evolve and develop broader capabilities in order to go [public] and continue as an independent company.”

Sisu is a Finnish word that translates loosely to tenacity or resilience, and is used by Finns to express their national character.

Amazon buys NVMe startup E8 Storage to boost public cloud

Another NVMe flash startup has been acquired — this time by a public cloud storage giant.

Amazon confirmed it will acquire E8 Storage and deploy its rack-scale flash storage in the Amazon Web Services (AWS) public cloud.

Amazon said the transaction includes “some assets,” among them the hiring of the E8 Storage team. E8 Storage CEO Zivan Ori reportedly will join Amazon in an unspecified executive capacity.

Israeli news outlet Globes first reported the story, citing unnamed sources who estimated Amazon will pay between $50 million and $60 million to acquire E8 Storage. A separate report by Reuters said the purchase price is much less, citing another source with knowledge of the deal. Amazon did not publicly disclose the acquisition price.

Amazon’s move comes two weeks after its public cloud rival Google bought file storage software startup Elastifile and nearly one month after holding company StorCentric acquired NVMe array hopeful Vexata.

The Amazon-E8 Storage marriage signals growing interest in NVMe flash. There is widespread industry belief that the NVMe protocol will eventually replace traditional SCSI-based storage. SCSI traffic makes several hops through the network and storage stack; by contrast, NVMe allows applications to talk directly to storage across multilane PCIe devices.

For Amazon, the deal highlights the competition it faces from enterprises seeking an AWS-like alternative that costs less than AWS and is managed on premises. It will be worth watching to see if Amazon integrates E8 Storage gear with AWS Nitro compute instances, which use NVMe as the underlying media with Elastic Block Store.

By acquiring E8 Storage, Amazon gains a storage operating system optimized for NVMe flash, said Eric Burgener, a research vice president of storage at analyst firm IDC.

“E8 has an NVMe-over-TCP implementation integrated in its software. It’s not that Amazon couldn’t have built that, but E8 already built it and it works. TCP is clearly the future of NVMe-over-fabrics-attached storage. That’s where the volume is going to be,” Burgener said.

Ori and Alex Friedman founded E8 Storage in 2014. Both previously had worked in management positions at IBM Storage. Friedman was E8’s vice president of R&D. E8 Storage emerged from stealth in 2016, with a dense block-based array that combines 24 NVMe SSDs in a 2U standard form factor.

The E8 Storage software targets analytics and similarly data-intensive workloads that require extreme performance and ultralow latency. E8 received more than $18 million in total funding, including a $12 million Series B round in 2016.

In addition to E8 arrays, customers have also been able to buy E8 Storage software on reference architecture with servers by Dell, Hewlett Packard Enterprise and Lenovo. The vendor this year added parallel file storage to target high-performance computing.

E8 Storage was an early entrant in end-to-end NVMe flash. The E8 architecture is based on industry-standard TCP over IP. Other NVMe startups include Apeiron Data, Excelero and Pavilion Data Systems.

Burgener said he wouldn’t be surprised to see more consolidation in NVMe storage. After ceding ground early, Burgener said legacy storage vendors have aggressively pushed into NVMe.

“Most of the majors have gotten their marketing acts together around selling NVMe for mixed workload consolidation, but they also want to go after the same kind of dedicated workloads” first targeted by NVMe startups, Burgener said.

Google’s Elastifile buy shows need for cloud file storage

Google’s acquisition of startup Elastifile underscored the increasing importance of enterprise-class file storage in the public cloud.

Major cloud providers have long offered block storage for applications that customers run on their compute services and focused on scale-out object storage for the massively growing volumes of colder unstructured data. Now they’re also shoring up file storage as enterprises look to shift more workloads to the cloud.

Google disclosed its intention to purchase Elastifile for an undisclosed sum after collaborating with the startup on a fully managed file storage service that launched early in 2019 on its cloud platform. At the time, Elastifile’s CEO, Erwan Menard, positioned the service as a complement to the Google Cloud Filestore, saying his company’s technology would provide higher performance, scale-out capacity and enterprise-grade features than the Google option.

Integration plans

In a blog post on the acquisition, Google Cloud CEO Thomas Kurian said the teams would join together to integrate the Elastifile technology with Google Cloud Filestore. Kurian wrote that Elastifile’s pioneering software-defined approach would address the challenges of file storage for enterprise-grade applications running at scale in the cloud.

“Google now has the opportunity to create hybrid cloud file services to connect the growing unstructured data at the edge or core data centers to the public cloud for processing,” said Julia Palmer, a vice president at Gartner. She said Google could have needed considerably more time to develop and perfect a scale-out file system if not for the Elastifile acquisition.

Building an enterprise-level, high-performance NFS file system from scratch is “insanely difficult,” said Scott Sinclair, a senior analyst at Enterprise Strategy Group. He said Google had several months to “put Elastifile through its paces,” see that the technology looked good, and opt to buy rather than build the sort of file system that is “essential for the modern application environments that Google wants to sell into.”

Target workloads

Kurian cited examples of companies running SAP and developers building stateful container-based applications that require natively compatible file storage. He noted customers such as Appsbroker, eSilicon and Forbes that use the Elastifile Cloud File Service on Google Cloud Platform (GCP). In the case of eSilicon, the company bursts semiconductor design workflows to Google Cloud when it needs extra compute and storage capacity during peak times, Elastifile has said.

“The combination of Elastifile and Google Cloud will support bringing traditional workloads into GCP faster and simplify the management and scaling of data and compute intensive workloads,” Kurian wrote. “Furthermore, we believe this combination will empower businesses to build industry-specific, high performance applications that need petabyte-scale file storage more quickly and easily.”

Elastifile’s Israel-based engineering team spent four years developing the distributed Elastifile Cloud File System (ECFS). They designed ECFS for hybrid and public cloud use and banked on high-speed flash hardware to prevent metadata server bottlenecks and facilitate consistent performance.

Elastifile emerged from stealth in April 2017, claiming 25 customers, including 16 service providers. Target use cases it cited for ECFS included high-performance NAS, workload consolidation in virtualized environments, big data analytics, relational and NoSQL databases, high-performance computing, and the lift and shift of data and applications to the cloud. Elastifile raised $74 million over four funding rounds, including strategic investments from Dell Technologies, Cisco and Western Digital.

One open question is the degree to which Google will support Elastifile’s existing customers, especially those with hybrid cloud deployments that do not run on GCP. Both Google and Elastifile declined to comment.

Cloud NAS competition

The competitive landscape for the Elastifile Cloud File Service on GCP has included Amazon’s Elastic File System (EFS), Dell EMC’s Isilon on GCP, Microsoft’s Azure NetApp Files, and NetApp on GCP.

“Cloud NAS and cloud file systems are the last mile for cloud storage. Everybody does block. Everybody does object. NAS and file services were kind of an afterthought,” said Henry Baltazar, research director of storage at 451 Research.

But Baltazar said as more companies are thinking about moving their NFS-based legacy applications to the cloud, they don’t want to go through the pain and the cost of rewriting them for object storage or building a virtual file service. He sees Google’s acquisition of Elastifile as “a good sign for customers that more of these services will be available” for cloud NAS.

“Google doesn’t really make infrastructure acquisitions, so it says something that Google would make a deal like this,” Baltazar said. “It just shows that there’s a need.”

Ytica acquisition adds analytics to Twilio Flex cloud contact center

Twilio has acquired the startup Ytica and plans to embed its workforce optimization and analytics software into Twilio Flex, a cloud contact center platform set to launch later this year. Twilio will also sell Ytica’s products to competing contact center software vendors.

Twilio declined to disclose how much it paid for Ytica, but said the deal wouldn’t significantly affect its earnings in 2018. Twilio plans to open its 17th branch office in Prague, where Ytica is based.  

The acquisition comes as AI analytics has emerged as a differentiator in the expanding cloud contact center market and as Twilio — a leading provider of cloud-based communications tools for developers — prepares for the general release of its first prebuilt contact center platform, Twilio Flex.

Founded in 2017, Ytica sells a range of real-time analytics, reporting and performance management tools that contact center vendors can add to their platforms. In addition to Twilio, Ytica has partnerships with Talkdesk and Amazon Connect that are expected to continue.

Twilio is targeting Twilio Flex at large enterprises looking for the flexibility to customize their cloud contact centers. The platform launched in beta in March and is expected to be commercially released later this year.

The vendor’s communications platform as a service already supports hundreds of thousands of contact center agents globally. Twilio Flex places those same developer tools into the shell of a contact center dashboard preconfigured to support voice, text, video and social media channels.
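
For a sense of those developer tools, the following minimal sketch uses Twilio’s Python helper library to send a message; the credentials and phone numbers are placeholders.

```python
from twilio.rest import Client

# Placeholder credentials; real values come from the Twilio console.
client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

message = client.messages.create(
    body="Thanks for reaching out. An agent will be with you shortly.",
    from_="+15005550006",  # a Twilio-provisioned number
    to="+15551234567",     # the customer's number
)
print(message.sid)
```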

The native integration of Ytica’s software should boost Twilio Flex’s appeal as businesses look for ways to save money and increase sales by automating the monitoring and management of contact center agents. 

Ytica’s portfolio includes speech analytics, call recording search, and real-time monitoring of calls and agent desktops. Businesses could use the technology to identify customer trends and to give feedback to agents.

Contact center vendors tout analytics in cloud

The marketing departments of leading contact center vendors have placed AI at the center of sales pitches this year, even though analysts say much of the technology is still in the early stages of usefulness.

This summer, Google unveiled an AI platform for building virtual agents and automating contact center analytics. Twilio was one of nearly a dozen vendors to partner with Google at launch, along with Cisco, Genesys, Mitel, Five9, RingCentral, Vonage, Appian and Upwire.

Within the past few months Avaya and Nice inContact have also updated their workforce optimization suites for contact centers with features including speech analytics and real-time trend reporting.

Enterprise technology buyers say analytics will be the most important technology for transforming customer experiences in the coming years, according to a recent survey of 700 IT and business leaders by Nemertes Research Group Inc., based in Mokena, Ill.

Qlik-Podium acquisition aims to boost BI data management

Qlik is buying startup Podium Data. The Qlik-Podium acquisition gives the self-service BI and data visualization software vendor new data management technology to boost its enterprise strategy and its ability to compete with archrival Tableau.

As part of the Qlik-Podium deal, Podium Data will move all 30 of its employees — including the co-founders and management team — from its Lowell, Mass., headquarters to Qlik’s regional office in Newton, Mass. Financial terms weren’t disclosed.

Podium will be a wholly owned subsidiary of Qlik and operate as a separate business unit, though with tighter connections to the Qlik platform to provide expanded BI data management capabilities, according to Drew Clarke, senior vice president of strategy management at Qlik.

Podium’s namesake technology, which automates data ingestion, validation, curation and preparation, will remain open and able to integrate with other vendors’ BI and analytics platforms, Podium CEO Paul Barth said.
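
As a rough illustration of that ingest-validate-curate flow, a generic pandas sketch might look like the following; the file names and rules are hypothetical, and this is not Podium’s pipeline.

```python
import pandas as pd

# Ingest a hypothetical raw extract.
raw = pd.read_csv("customers_raw.csv")

# Validate: require a customer ID and a parseable signup date.
valid = raw.dropna(subset=["customer_id"]).copy()
valid["signup_date"] = pd.to_datetime(valid["signup_date"], errors="coerce")
valid = valid.dropna(subset=["signup_date"])

# Curate: standardize a field and write an analytics-ready file.
valid["country"] = valid["country"].str.upper()
valid.to_parquet("customers_curated.parquet")
```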

Qlik on the rebound?

The Qlik-Podium purchase is part of Qlik’s effort to rebound from business problems that led to it being bought and taken private by private equity firm Thoma Bravo in 2016. Multiple rounds of layoffs and a management change ensued, and Qlik lagged somewhat behind both Tableau and Microsoft Power BI in marketing and product development.

“It’s part of an acceleration of our vision,” Clarke said of the acquisition in a joint interview with Barth at the Newton office. “When we looked at what’s going on in big data in terms of volume of data and the management of that and making it accessible and analytics-ready, we felt that the Podium data solution was a great fit.”

While Clarke maintained that Qlik has been “enterprise-class” rather than department-oriented for some time, he also said Podium’s data management technology gives Qlik the ability to scale up and manage larger volumes of data.

Clarke said Podium’s technology is complementary to Qlik’s Associative Big Data Index, a system expected to be released later this year that will index the data in Hadoop clusters and other big data platforms for faster access by Qlik users. “Podium can be used to prepare data files, which supports the Associative Big Data Index creating its indexes and other files,” he said.

Photo of Qlik and Podium execs by Shaun Sutner
Drew Clarke, Qlik senior vice president of strategy management (left), and Podium Data CEO Paul Barth at Qlik’s office in Newton, Mass.

How the sale came about

Barth said that after emerging as a startup more than four years ago, Podium was mulling another round of investment in January. The company started talking to investors and “strategic technology companies” and connected with Qlik, he added.

Podium fills the data component in Qlik’s strategy of providing data, a platform and analytics tools, Barth said, “and we’re going to work with them on the platform piece to deploy this both on premises and in the cloud.”

For now, Podium is keeping its name, “but more information will be coming” about that within the year, including the possibility of a new name, Clarke said.

With Podium, Qlik broadens scope

Tableau released a data preparation tool for use with its BI software in April. But buying Podium enables Qlik to establish a complete data management and analytics platform in conjunction with its Qlik Sense software and improves the company’s ability to compete with Tableau, said Donald Farmer, principal of analytics consulting firm TreeHive Strategy and a former Qlik executive.

“This is a good acquisition for Qlik,” Farmer said. “In terms of competition, this more complete platform enables them to position effectively in a broader space than, say, Tableau.”

Farmer said the Qlik-Podium acquisition also makes Qlik more closely resemble enterprise BI companies like Tibco and Microsoft that offer end-to-end self-service data management, including software for acquiring, cleansing and curating data, plus analytics and collaboration tools.

“Together with announcements that Qlik made at their Qonnections conference in May about machine learning and big data analytics, this is part of a trend of Qlik expanding their scope,” Farmer said.

Qlik gets data lake capabilities

Podium often is associated with managing data lake environments that feed big data applications, although it says its platform can handle all types of enterprise data. The Podium architecture is built on top of Hadoop, which Barth said makes the technology less expensive for enterprises running tens of thousands of processing jobs a night.

David Menninger, an analyst at Ventana Research, said he was surprised at the Qlik-Podium acquisition announcement.

In part, that’s “because Qlik has not been particularly strong in the data lake market because of their in-memory architecture, but Podium is largely focused on data lakes or big data implementations,” Menninger said.

Nonetheless, Menninger said he sees some positive potential for the deal for Qlik and its users.

“As analytics vendors add more data preparation capabilities, Podium Data’s capabilities can significantly enhance the value of data processed using Qlik,” he said.

News writer Mark Labbe contributed to this story.

Startup Arrcus aims NOS at Cisco, Juniper in the data center

Startup Arrcus has launched a highly scalable network operating system, or NOS, designed to compete with Cisco and Juniper Networks for the entire leaf-spine network in the data center.

Arrcus, based in San Jose, Calif., introduced its debut product, ArcOS, this week. Additionally, the startup announced $15 million in Series A funding from venture capital firms Clear Ventures and General Catalyst.

ArcOS, which has Debian Open Network Linux at its core, enters a crowded market of companies offering stand-alone network operating systems for switching and routing, as well as systems of integrated software and hardware. The latter is a strength of traditional networking companies, such as Cisco and Juniper.

While Arrcus has some catching up to do against its rivals, no company has taken a dominating share of the NOS-only market, said Shamus McGillicuddy, an analyst at Enterprise Management Associates (EMA), based in Boulder, Colo. The majority of vendors count customers in the tens or hundreds at most.

Many companies testing pure open source network operating systems today are likely candidates for commercial products. Vendors, however, must first prove the technology is reliable and fits the requirements of a corporate data center.

“Arrcus has a chance to capture a share of the data center operators that are still thinking about disaggregation,” McGillicuddy said. Disaggregation refers to the separation of the NOS from the underlying hardware.

ArcOS hardware support

Arrcus supports ArcOS on Broadcom chipsets and white box hardware from Celestica, Delta Electronics, Edgecore Networks and Quanta Computer. The approved chipsets are the StrataDNX Jericho+ and StrataXGS Trident 3, Trident 2 and Tomahawk. Architecturally, ArcOS can operate on other silicon and hardware, but the vendor does not support those configurations.

ArcOS is a mix of open source and proprietary software. The company, for example, uses its own versions of routing protocols, including Border Gateway Protocol, Intermediate System to Intermediate System, Multiprotocol Label Switching and Open Shortest Path First.

“Arrcus has built its own routing stack that is highly scalable, so it’s ideal for covering the entire leaf-and-spine network,” McGillicuddy said. “The routing scalability also gives Arrcus the ability to do some sophisticated traffic engineering.”

The more sophisticated uses for ArcOS include internet peering for internet service providers and makers of content delivery networks, according to Devesh Garg, CEO at Arrcus. “We feel ArcOS can be used anywhere.”

ArcOS analytics

Arrcus is also providing analytics for monitoring and optimizing the performance and security of switching and routing, McGillicuddy said. The company has based its analytics on the control plane and data plane telemetry streamed from the NOS.

Because a lot of other NOS vendors lack analytics, “Arrcus is emerging with a more complete solution from an operational standpoint,” McGillicuddy said. According to EMA research, many enterprises want embedded analytics in their network infrastructure.

Today, a hardware-agnostic NOS is mostly used by the largest of financial institutions, cloud service providers, and internet and telecommunication companies, analysts said. Tackling networking through disaggregated hardware and software typically requires a level of IT sophistication not found in mainstream enterprises.

Nevertheless, companies find the concept attractive because of the promise of cheaper hardware, faster innovation and less dependence on a single vendor. As a result, the use of a stand-alone NOS is gaining some traction.

Last year, for example, Gartner included NOS makers in its Magic Quadrant for Data Center Networking for the first time. Big Switch Networks and Cumulus Networks met the criteria for inclusion in the “visionaries” quadrant, along with VMware and Dell EMC.