
TigerGraph Cloud releases graph database as a service

With the general release of TigerGraph Cloud on Wednesday, TigerGraph introduced its first native graph database as a service.

In addition, the vendor announced that it secured $32 million in Series B funding, led by SIG.

TigerGraph, founded in 2012 and based in Redwood City, Calif., is a native graph database vendor whose products, first released in 2016, enable users to manage and access their data in ways that differ from traditional relational databases.

Graph databases simplify the connection of data points and enable them to simultaneously connect with more than one other data point. Among the benefits are the ability to significantly speed up the process of developing data into insights and to quickly pull data from disparate sources.
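The idea can be illustrated with a generic Python sketch (this is not TigerGraph's GSQL query language, and the node names are hypothetical): each data point is a node that links directly to any number of others, so a multi-hop question becomes a short traversal rather than a chain of relational joins.

```python
from collections import deque

# A toy graph: each node links directly to any number of others.
# Hypothetical data -- not drawn from any real TigerGraph schema.
edges = {
    "alice":    ["acct_1", "device_7"],
    "bob":      ["acct_2", "device_7"],  # shares a device with alice
    "acct_1":   ["alice"],
    "acct_2":   ["bob"],
    "device_7": ["alice", "bob"],
}

def neighbors_within(start, hops):
    """Breadth-first traversal: everything reachable in <= `hops` edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen - {start}

# Two hops from alice reach bob via the shared device -- the kind of
# connection a fraud-detection or recommendation query looks for.
print(neighbors_within("alice", 2))
```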

Before the release of TigerGraph Cloud, TigerGraph customers were able to take advantage of the power of graph databases, but they were largely on-premises users, and they had to do their own upgrades and oversee the management of the database themselves.

“The cloud makes life easier for everyone,” said Yu Xu, CEO of TigerGraph. “The cloud is the future, and more than half of database growth is coming from the cloud. Customers asked for this. We’ve been running [TigerGraph Cloud] in a preview for a while — we’ve gotten a lot of feedback from customers — and we’re big on the cloud. [Beta] customers have been using us in their own cloud.”

Regarding the servicing of the databases, Xu added: “Now we take over this control, now we host it, we manage it, we take care of the upgrades, we take care of the running operations. It’s the same database, but it’s an easy-to-use, fully SaaS model for our customers.”

In addition to providing graph database management as a service and enabling users to move their data management to the cloud, TigerGraph Cloud provides customers an easy entry into graph-based data analysis.

Some of the most well-known companies in the world, at their core, are built on graph databases.

Google, Facebook, LinkedIn and Twitter are all built on graph technology. Those vendors, however, have vast teams of software developers to build their own graph databases and teams of data scientists to do their own graph-based data analysis, noted TigerGraph chief operating officer Todd Blaschka.

“That is where TigerGraph Cloud fits in,” Blaschka said. “[TigerGraph Cloud] is able to open it up to a broader adoption of business users so they don’t have to worry about the complexity underneath the hood in order to be able to mine the data and look for the patterns. We are providing a lot of this time-to-value out of the box.”

TigerGraph Cloud comes with 12 starter kits that help customers quickly build their applications. It also doesn’t require users to configure or manage servers, schedule monitoring or deal with potential security issues, according to TigerGraph.

That, according to Donald Farmer, principal at TreeHive Strategy, is a differentiator for TigerGraph Cloud.


“It is the simplicity of setting up a graph, using the starter kits, which is their great advantage,” he said. “Classic graph database use cases such as fraud detection and recommendation systems should be much quicker to set up with a starter kit, therefore allowing non-specialists to get started.”

Graph databases, however, are not better for everyone and everything, according to Farmer. They are better than relational databases for specific applications, in particular those in which augmented intelligence and machine learning can quickly discern patterns and make recommendations. But they are not yet as strong as relational databases in other key areas.

“One area where they are not so good is data aggregation, which is of course a significant proportion of the work for business analytics,” Farmer said. “So relational databases — especially relational data warehouses — still have an advantage here.”

Despite drawbacks, the market for graph databases is expected to grow substantially over the next few years.

And much of that growth will be in the cloud, according to Blaschka.

Citing a report from Gartner, he said that 68% of graph database market growth will be in the cloud, while the graph database market as a whole is forecast to grow at least 100% year over year through 2022.

“The reason we’re seeing this growth so fast is that graph is the cornerstone for technologies such as machine learning, such as artificial intelligence, where you need large sets of data to find patterns to find insight that can drive those next-gen applications,” he said. “It’s really becoming a competitive advantage in the marketplace.”

With respect to the $32 million TigerGraph raised in Series B financing, according to Xu it will be used to help TigerGraph expand its reach into new markets and accelerate its emphasis on the cloud.


Dremio Data Lake Engine 4.0 accelerates query performance

Dremio is advancing its technology with a new release that supports AWS, Azure and hybrid cloud deployments, providing what the vendor refers to as a Data Lake Engine.

The Dremio Data Lake Engine 4.0 platform is rooted in multiple open source projects, including Apache Arrow, and offers the promise of accelerated query performance for data lake storage.

Dremio made the platform generally available on Sept. 17. The Dremio Data Lake Engine 4.0 update introduces a feature called column-aware predictive pipelining that helps predict access patterns, which makes queries faster. The new Columnar Cloud Cache (C3) feature in Dremio also boosts performance by caching data closer to where compute execution occurs.

For IDC analyst Stewart Bond, the big shift in the Dremio 4.0 update is how the data lake engine vendor has defined its offering as a “Data Lake Engine” focused on AWS and Azure.

In some ways, Dremio had previously struggled to define what its technology actually does, Bond said. In the past, Dremio had been considered a data preparation tool, a data virtualization tool and even a data integration tool, he said. It does all those things, but in ways, and with data, that differ markedly from traditional technologies in the data integration software market.

“Dremio offers a semantic layer, query and acceleration engine over top of object store data in AWS S3 or Azure, plus it can also integrate with more traditional relational database technologies,” Bond said. “This negates the need to move data out of object stores and into a data warehouse to do analytics and reporting.”


Simply having a data lake doesn’t do much for an organization. A data lake is just data, and just as with natural lakes, water needs to be extracted, refined and delivered for consumption, Bond said.

“For data in a data lake to be valuable, it typically needs to be extracted, refined and delivered to data warehouses, analytics, machine learning or operational applications where it can also be transformed into something different when blended with other data ingredients,” Bond said. “Dremio provides organizations with the opportunity to get value out of data in a data lake without having to move the data into another repository, and can offer the ability to blend it with data from other sources for new insights.”

How Dremio Data Lake Engine 4.0 works

Organizations use technologies like ETL (extract, transform, load), among other things, to move data from data lake storage into a data warehouse because they can’t query the data fast enough where it is, said Tomer Shiran, co-founder and CTO of Dremio. That performance challenge is one of the drivers behind the C3 feature in Dremio 4.

“With C3 what we’ve developed is a patent pending real-time distributed cache that takes advantage of the NVMe devices that are on the instances that we’re running on to automatically cache data from S3,” Shiran explained. “So when the query engine is accessing a piece of data for the second time, it’s at least 10 times faster than getting it directly from S3.”
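The read-through pattern Shiran describes can be sketched in a few lines of Python (an illustration of the caching idea only; Dremio's C3 is a distributed, NVMe-backed implementation, and all names below are hypothetical):

```python
import time

def slow_fetch_from_s3(key):
    """Stand-in for a remote object-store read (hypothetical)."""
    time.sleep(0.01)          # simulate network round-trip latency
    return f"data-for-{key}"

class ReadThroughCache:
    """Cache objects near compute: check local storage before the store."""
    def __init__(self, fetch):
        self._fetch = fetch
        self._local = {}      # stands in for the local NVMe cache

    def get(self, key):
        if key not in self._local:           # first access: go to S3
            self._local[key] = self._fetch(key)
        return self._local[key]              # repeat access: local read

cache = ReadThroughCache(slow_fetch_from_s3)
cache.get("part-0001.parquet")   # slow, populates the cache
cache.get("part-0001.parquet")   # fast, served from local storage
```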

Dremio data lake architecture

The new column-aware predictive pipelining feature in Dremio Data Lake Engine 4.0 further accelerates query performance for the initial access. The feature increases data read throughput to the maximum that is allowed on a given network, Shiran explained.

While Dremio is positioning its technology as a data lake engine that can be used to query data stored in a data lake, Shiran noted that the platform also has data virtualization capabilities. With data virtualization, pointers or links to data sources enable the creation of a logical data layer.

Apache Arrow

One of the foundational technologies that enables the Dremio Data Lake Engine is the open source Apache Arrow project, which Shiran helped to create.

“We took the internal memory format of Dremio, and we open sourced that as Apache Arrow, with the idea that we wanted our memory format to be an industry standard,” Shiran said.

Arrow has become increasingly popular over the past three years and is now used by many different tools, including Apache Spark.

With the growing use of Arrow, Dremio’s goal is to make communications between its platform and other tools that use Arrow as fast as possible. Among the ways that Dremio is helping to make Arrow faster is with the Gandiva effort that is now built into Dremio 4, according to the vendor. Gandiva is an execution kernel that is based on the LLVM compiler, enabling real-time code compilation to accelerate queries.

Dremio will continue to work on improving performance, Shiran said.

“At the end of the day, customers want to see more and more performance, and more data sources,” he said. “We’re also making it more self-service for users, so for us we’re always looking to reduce friction and the barriers.”


IBM Storage syncs new DS8900F array to z15 mainframe launch

IBM Storage launched new faster all-flash, standard-rack-sized DS8900F arrays to coincide with the release of its new z15 mainframe.

The DS8900F models use IBM’s latest Power Systems Power 9 processors and an optimized software stack to boost performance over their Power 8-based DS8880 predecessors. IBM claimed users will see lower latency (from 20 microseconds to 18 μs), improved IOPS and twice the bandwidth when using the DS8900F arrays connected to z15 mainframes equipped with zHyperLink I/O adapter cards, compared to using the DS8880.

IBM storage customers will note similar performance improvements when they use the DS8900F arrays with z14 mainframes that have zHyperLink cards. Those that use older z13 mainframes without the zHyperLink cards will see response time drop from 120 μs to 90 μs, IBM claims.

IDC research vice president Eric Burgener said IBM mainframe customers who use a FICON host connection and zHyperLink cards could see latency that’s lower than what any other storage array in the industry can deliver, outside of host-side storage using persistent memory, such as NetApp’s MAX Data.

New IBM storage array is flash only

The prior DS8880 family included all-flash, all-disk and hybrid options mixing disk and solid-state drives (SSDs). But the new DS8900F array that IBM plans to ship next month will use only flash-based SSDs. The maximum capacities are 5.9 PB for the DS8950F model and 2.9 PB for the DS8910F when configured with 15.36 TB flash drives.

Another difference between the DS8900F and its DS8880 predecessor is availability. The DS8900F offers seven 9s (99.99999% availability) compared to DS8880’s six 9s (99.9999% availability). Eric Herzog, CMO and vice president of storage channels at IBM, said seven 9s would translate to 3.1 seconds of statistical downtime with round-the-clock operation over the course of a year.
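The downtime figures follow directly from the availability percentages; a quick back-of-the-envelope check in Python:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600   # 31,536,000 seconds

def annual_downtime_seconds(availability):
    """Statistical downtime per year of round-the-clock operation."""
    return (1 - availability) * SECONDS_PER_YEAR

print(annual_downtime_seconds(0.999999))    # six 9s:  ~31.5 seconds
print(annual_downtime_seconds(0.9999999))   # seven 9s: ~3.15 seconds
```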

“From five-plus minutes to less than four seconds doesn’t sound like much on an annual basis, but it really is,” said David Hill, founder and principal analyst at Mesabi Group. “It greatly decreases the chances that a system will ever go down in a year, and that is not a bad thing.”

Although the availability boost may be important for some customers, IBM partner Truenorth has found that six 9s is more than enough for its clients’ needs, according to Truenorth software principal Tomas Perez.  

Perez said a more important new feature is the industry-standard rack size that will make the DS8910F homogeneous with other equipment. That should be helpful as Puerto Rico-based Truenorth puts together disaster recovery (DR) systems for its customers. Truenorth’s clients, including Puerto Rico’s treasury department, have focused on DR in the wake of Hurricane Maria.

The new DS8900F arrays conform to industry-standard rack dimensions of 19 inches wide, 36 inches deep and 42 inches tall. Most IBM DS8880 models have a standard width of 19 inches but non-standard depth, at 54 inches, and height, at 40 inches, with expansion options to 46 inches. In 2018, IBM added a standard-sized DS8882F model to fit into the standard-sized z14 Model ZR1 and LinuxOne Rockhopper II mainframes released that year.

IBM storage security enhancements

With the latest systems, IBM is adding the ability to encrypt data in flight between the new z15 mainframe and the DS8900F array. IBM supported only data-at-rest encryption in prior models.

Herzog said the hardware-based data encryption would not affect performance because the system uses an encryption coprocessor. Prior models use the main CPU for hardware-based encryption, so there could be a performance impact depending on the configuration or workload, Herzog said.

Endpoint security is another new capability that IBM is adding with its Z, LinuxOne and DS8900F systems. Herzog described the new functionality as a “custom handshake” to ensure that the array and the Z system know they’re talking to each other, rather than any spoofed system. 

The DS8900F will also support previously available IBM capabilities. Safeguarded Copy enables up to 500 immutable, point-in-time snapshots of data for protection against malware and ransomware attacks. IBM Storage Insights’ predictive analytics assists with capacity and performance management. And IBM’s Transparent Cloud Tiering supports protecting and archiving encrypted block-based data to S3-based object storage, from providers such as AWS and IBM Cloud, without a separate gateway.

Besides supporting IBM Z mainframes, the DS8900F also works with non-mainframe systems based on Unix, Linux, Windows and VMware. The new z15 mainframe is due to become generally available next week, and the DS8900F storage array will follow in October. The starting price is $134,000 for the DS8910F and $196,000 for the DS8950F, according to IBM.

IDC’s Burgener said IBM’s DS competitors, Dell EMC and Hitachi Vantara, generally support distributed systems before adding support for mainframes six to 12 months later. He said IBM’s DS arrays, by contrast, always support mainframes on day one. IBM owns 40% to 50% of the mainframe-attached storage market, Burgener said.

“We should see a noticeable bump in IBM’s overall storage revenues over the next 12 months as their customers go through refresh cycles, and that bump may be a bit higher, if for no other reason than the fact that they are including these new arrays on every new mainframe quote,” Burgener said.


Magento BI update a benefit to vendor’s e-commerce customers

With the rollout of the Magento Business Intelligence Summer 2019 Release on Thursday, the Magento BI platform will get improved scheduling capabilities along with a host of new dashboard visualizations.

Magento, founded in 2008 and based in Culver City, Calif., is primarily known for its e-commerce platform. In 2018 the vendor was acquired by Adobe for $1.7 billion and is now part of the Adobe Experience Cloud.

With the vendor’s focus on e-commerce, the Magento BI platform isn’t designed to compete as a standalone tool against the likes of Microsoft Power BI, Qlik, Tableau and other leading BI vendors. Instead, it’s designed to slot in with Magento’s e-commerce platform and is intended for existing Magento customers.

“I love the BI angle Magento is taking here,” said Mike Leone, a senior analyst at Enterprise Strategy Group. “I would argue that many folks that utilize their commerce product are by no means experts at analytics. Magento will continue to empower them to gain more data-driven insights in an easy and seamless way. It is enabling businesses to take the next step into data-driven decision making without adding complexity.”

Similarly, Nicole France, principal analyst at Constellation Research, noted the importance of enhancing the BI capabilities of Magento’s commerce customers.

“This kind of reporting on commerce systems is undoubtedly useful,” she said. “The idea here seems to be reaching a wider audience than the folks directly responsible for running commerce. That means putting the right data in the appropriate context.”

The updated Magento BI platform comes with 13 data visualization templates, now including bubble charts, and over 100 reports.

A sample bubble chart from Magento shows an organization’s customer breakdown by state.

In addition, it comes with enhanced sharing capabilities. Via email, users can schedule reports to go out to selected recipients on a one-time basis or any repeating schedule they want. They can also keep track of the relevancy of the data with time logs and take all necessary actions from a status page.

“It finds the insights merchants want,” said Daniel Rios, product manager at Adobe. “It brings BI capabilities to merchants.”

Matthew Wasley, product marketing manager at Adobe, added: “Now there’s a better way to share insights that goes right to the inbox of a colleague and is part of their daily workflow.

“They can see the things they need to see — it bridges the gap,” Wasley said. “It’s an email you actually want to open.”

According to Wasley, the Magento BI platform provides a full end-to-end data stack that services customers from the data pipeline through the data warehouse and ultimately to the dashboard visualization layer.

While some BI vendors offer products with similar end-to-end capabilities, others offer only one layer and need to be paired with other products to help a business client take data from its raw form and develop it into a digestible form.

“We’re ahead of the curve with Magento,” Wasley said.

He added that the end-to-end capability of the Magento BI tool is something other vendors are trying to put together through acquisitions. Though he didn’t name any companies specifically, Google with its purchase of Looker and Salesforce with its acquisition of Tableau are two that fit the mold.


Still, the Magento BI tool isn’t designed to compete on the open market against vendors who specialize in analytics platforms.

“We see our BI as a differentiator for our commerce platform,” said Wasley. “Standalone BI is evolving in itself. It’s tailored, and differentiates our commerce product.”

Moving forward, like the BI tools offered by other vendors, the Magento BI platform will gain more augmented intelligence and machine learning capabilities, with innovation accelerated by Magento’s integration into the Adobe universe.

“We’re seeing how important data is across Adobe,” said Wasley. “All together, it’s meant to … make better use of data. Because of the importance of data across Adobe, we’re able to innovate a lot faster over the next six to 12 months.”

And presumably, that means further enhancement of the Magento BI platform for the benefit of the vendor’s e-commerce customers.


Microsoft patches two Windows zero-days in July Patch Tuesday

The July 2019 Patch Tuesday release included fixes for 77 vulnerabilities, two of which were Windows zero-days that were actively exploited in the wild.

The two Windows zero-days are both local escalation-of-privilege flaws that cannot be used alone to perform an attack. One zero-day, CVE-2019-0880, is a flaw in how splwow64.exe handles certain calls. The issue affects Windows 8.1, Windows 10 and Windows Server 2012, 2016 and 2019.

“This vulnerability by itself does not allow arbitrary code execution; however, it could allow arbitrary code to be run if the attacker uses it in combination with another vulnerability that is capable of leveraging the elevated privileges when code execution is attempted,” according to Microsoft.

The other Windows zero-day the vendor patched was CVE-2019-1132, which caused the Win32k component to improperly handle objects in memory. This issue affects Windows 7 and Windows Server 2008.

“To exploit this vulnerability, an attacker would first have to log on to the system,” Microsoft noted. “An attacker could then run a specially crafted application that could exploit the vulnerability and take control of an affected system.”

This zero-day was reported to Microsoft by ESET. Anton Cherepanov, senior malware researcher for ESET, detailed a highly targeted attack in Eastern Europe and recommended upgrading systems as the best remediation against attacks.

“The exploit only works against older versions of Windows, because since Windows 8 a user process is not allowed to map the NULL page. Microsoft back-ported this mitigation to Windows 7 for x64-based systems,” Cherepanov wrote in a blog post. “People who still use Windows 7 for 32-bit systems Service Pack 1 should consider updating to newer operating systems, since extended support of Windows 7 Service Pack 1 ends on January 14th, 2020. Which means that Windows 7 users won’t receive critical security updates. Thus, vulnerabilities like this one will stay unpatched forever.”

Other patches

Beyond the two Windows zero-days patched this month, there were six vulnerabilities patched that had been publicly disclosed, but no attacks were seen in the wild. The disclosures could potentially aid attackers in exploiting the issues faster, so enterprises should prioritize the following:

  • CVE-2018-15664, a Docker flaw in the Azure Kubernetes Service;
  • CVE-2019-0962, an Azure Automation escalation-of-privilege flaw;
  • CVE-2019-0865, a denial-of-service flaw in SymCrypt;
  • CVE-2019-0887, a remote code execution (RCE) flaw in Remote Desktop Services;
  • CVE-2019-1068, an RCE flaw in Microsoft SQL Server; and
  • CVE-2019-1129, a Windows escalation-of-privilege flaw.

The Patch Tuesday release also included 15 vulnerabilities rated critical by Microsoft. Some standout patches in that group included CVE-2019-0785, a DHCP Server RCE issue, and four RCE issues affecting Microsoft browsers, which Trend Micro labeled as noteworthy — CVE-2019-1004, CVE-2019-1063, CVE-2019-1104 and CVE-2019-1107.


PlayerUnknown’s Battlegrounds Full Product Release Now Available on Xbox One – Xbox Wire

Today, the Full Product Release (1.0) update for PlayerUnknown’s Battlegrounds (PUBG) released for new and existing owners across the Xbox One family of devices. This is a big moment for the PUBG Xbox community, now over nine million players strong, who have been every bit an important part of the development process since we first launched in Xbox Game Preview in December 2017. With the support of fans and the team at Microsoft, it’s been an incredible journey and we’re just getting started.

The Full Product Release comes with several exciting updates, including the Xbox One debut of the Sanhok Map, available today, along with Event Pass: Sanhok, which unlocks awesome rewards for leveling up and completing missions. The Sanhok Map is included with the Full Product Release 1.0 update, and Event Pass: Sanhok can be purchased in the Microsoft Store or the PUBG in-game Store beginning today. For additional details on all of the new features included in the Full Product Release update today and in the weeks ahead, click here.

While Full Product Release represents an exciting milestone for PUBG on Xbox One, it does not represent the end of the journey. The game will continue to be updated and optimized, and we have an exciting roadmap of new features and content ahead in the months to come, including the winter release of an all-new snow map.

The Full Product Release of PUBG for Xbox One is available for $29.99 USD digitally and as a retail disc version at participating retailers worldwide. If you already own the Xbox Game Preview version of PUBG on Xbox One you will receive a content update automatically today at no additional cost.

As shared previously, we’re also providing some special bonuses both to new players and those who have supported PUBG over the past nine months.

To enhance the ultimate PUBG experience on Xbox, fans can also look forward to the PlayerUnknown’s Battlegrounds Limited Edition Xbox Wireless Controller, which is now available for pre-order online at the Microsoft Store and starts shipping to retailers worldwide on October 30 for $69.99 USD.

Be sure to tune in to Mixer’s very own HypeZone PUBG Channel to catch the most exciting, down-to-the-wire PUBG action that gives viewers the opportunity to discover streamers of all levels during the most intense moments of the game.

Whether you’re already a player or your chicken dinner hunt starts today – now is the best time to jump into PUBG on Xbox One!

Alteryx 2018.3 gives users new data visualization options

The general release of Alteryx 2018.3 is now available, bringing with it more data visualization tools in an effort by Alteryx Inc. to give users of the data preparation and analytics platform a broader set of visualization capabilities.

The quarterly update became generally available on Aug. 28. It also adds other new functionality to the Alteryx Analytics platform, including an analytics caching feature and a Python tool that will allow developers to write to Jupyter Notebook. In addition, Alteryx 2018.3 offers faster performance, more server management options and increased support for the Spark processing engine.

For users, the highlights of the new release are likely to be the additional visualization tools and the caching capability. The need for better data visualization is particularly acute. While Gartner ranked Alteryx among the leading vendors in its 2018 Magic Quadrant report on data science and machine learning platforms, it faulted the company for reporting and visualization capabilities that “remain comparatively weak.”

An increased focus on visualization

Alteryx 2018.3 clearly aims to address the visualization gap by expanding an embedded collection of tools called Visualytics, which Alteryx introduced last August.


Following user requests for more, Alteryx has added a tool for building and sharing interactive charts and graphs that resulted from a 2017 partnership deal with visualization vendor Plotly. Alteryx users can now also combine multiple interactive charts together and share them with other users for collaborative analysis, said Greg Davoll, vice president of product marketing at the vendor, based in Irvine, Calif.

Meanwhile, the new caching tool enables users to create caching points in the analytics workflow process. If the process is stopped, it will be restarted from the caching point, without the need to completely start over. That can help reduce processing times, as it has done for Alteryx user Voxx Analytics.
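The idea behind caching points can be shown as a minimal Python sketch (illustrative only; Alteryx implements this inside its workflow designer, and the step names below are made up): completed steps persist their results, so a restarted run skips straight to the first uncached step.

```python
def run_workflow(steps, checkpoint):
    """Run steps in order, skipping any whose result is already cached."""
    results = {}
    for name, func in steps:
        if name in checkpoint:                # resume from the caching point
            results[name] = checkpoint[name]
        else:
            results[name] = func(results)
            checkpoint[name] = results[name]  # persist for the next run
    return results

# Hypothetical three-step data prep flow.
steps = [
    ("download", lambda r: list(range(5))),
    ("clean",    lambda r: [x for x in r["download"] if x % 2 == 0]),
    ("score",    lambda r: sum(r["clean"])),
]

checkpoint = {}
run_workflow(steps, checkpoint)   # first run computes every step
# If the process stops and restarts, completed steps are not recomputed:
results = run_workflow(steps, checkpoint)
print(results["score"])           # 0 + 2 + 4 = 6
```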

A Garden Grove, Calif., company that provides what it calls influence analytics services to help companies hone their marketing outreach efforts, particularly in the pharmaceuticals and life sciences industries, Voxx was an early adopter of Alteryx 2018.3 as part of the beta program for the new release.

Ryan Peeler, director of network analytics at Voxx, said his team uses the Alteryx software to automate much of the name disambiguation process in analyzing data from social networks. Peeler added that the new caching tool has already saved him “a ton of time” on analytics processing jobs.

“Once I’ve pulled data once for a use case, I don’t need to keep downloading it,” he said. All that data now gets cached, “so the next time I run it, it picks up right where I left off.”

Still room for more improvement

The Visualytics enhancements are also of interest to Peeler, who said they have made it easier for him to create data visualizations for Voxx customers. Still, while he likes where Alteryx has gone with Visualytics thus far, he noted that if he could make a change to Alteryx’s software, it would be to further the platform’s visualization capabilities even more.

For example, Peeler said he would like to be able to export data visualizations from Alteryx to other analytics and reporting platforms, so they could be shared with corporate clients more easily. “Putting in visuals that allow customers to explore data sets has particular positive connotations to us,” he said.

Donald Farmer, principal of analytics consulting firm TreeHive Strategy, said the Visualytics components of Alteryx 2018.3 are notable enhancements.

“Visualytics recognizes what is too often overlooked in the data analysis user experience: that data preparation and analysis are two sides of the same coin,” he said. “These are really not separate processes. You prepare data with an analysis in mind, and as you develop the visualization or interpretation, you discover ways in which the data must be further prepared or refined to improve the analysis.”

The integrated capabilities provided by Alteryx are particularly useful for “data artisans, who are working hands-on with the data and not visualizing at the end of some other process,” Farmer continued. He also described the caching feature as “a significant enhancement for advanced users,” saying it will help ease the hassles of developing complex data flows.

Pricing in question

However, Farmer negatively noted the Alteryx platform’s pricing model, which charges users of the Alteryx Designer desktop tool an extra $6,500 per year for a feature that allows them to schedule analytics workflows and automate the generation of reports. That’s on top of the $5,195 annual base cost per user for the Designer software.

“In the 21st century, that’s like selling a car with a hand crank and charging extra for an electric starter,” Farmer said.

As for what users might expect beyond the Alteryx 2018.3 update, Davoll said to look for more automation and smart analytics capabilities. While not announced yet, the 2018.4 release will likely become available to beta users in the next couple weeks, he added.

On the data preparation side, vendors that Alteryx competes with include Datawatch, Paxata and Trifacta. In addition, self-service BI vendor Tableau, whose software is often complemented by Alteryx’s technology in user deployments, released its own Tableau Prep tool last spring, enabling users to do at least some basic data preparation tasks directly in their Tableau systems.

According to Farmer, Alteryx 2018.3 could be seen partly as a response to Tableau Prep that’s designed to raise Alteryx’s analytics and data visualization profile with users. He added, however, that the Tableau tool “has been less impactful on Alteryx than many expected.”

Polycom VVX series adds four new desk phones

Polycom has expanded its VoIP endpoint portfolio with the release of four new open SIP phones. The vendor also launched a new cloud-based device management service to help partners provision and troubleshoot Polycom devices.

The release builds upon the Polycom VVX series of IP desk phones. The more advanced models include color LCD displays and gigabit Ethernet ports, unlike any of the previous phones in the Polycom VVX series.

The VVX 150 is the most basic of the new devices. Designed for home offices or common areas, the VVX 150 supports two lines and does not have a USB port or a color display.

The VVX 250 is targeted at small and midsize businesses, with a 2.8-inch color LCD display, HD audio, one USB port and support for up to four lines.

The VVX 350 is for cubicle workers, call centers and small businesses. It has a 3.5-inch color LCD display, two USB ports and support for six lines.

The most advanced of the four new models, the VVX 450, can host 12 lines and comes with a 4.3-inch color LCD display. Polycom said the phones are meant for front-line staff in small and midsize businesses.

The new phones rely on the same unified communications software as the rest of the Polycom VVX series, which should simplify the certification process for service providers, Polycom said. 8×8, Nextiva and The Voice Factory were the first voice providers to certify the devices.

Unlike traditional proprietary phones, open SIP phones can connect to the IP telephony services of a wide range of vendors. This simplifies interoperability for businesses that get UC services from multiple vendors.

Polycom embraces cloud to help sell hardware

Polycom has launched two new cloud services in an attempt to make its hardware more attractive to enterprises and service providers.

Polycom Device Management Service for Service Providers, released this week, gives partners a web-based application for managing Polycom devices. This should help service providers improve uptimes and enhance end-user control panels. Polycom launched a similar service for enterprises earlier this year.

Polycom’s new cloud offering aligns well with the cloud management platform for headsets offered by Plantronics, which acquired Polycom in a $2 billion deal that closed last month. Polycom first announced the cloud services in May, prior to the acquisition being made final.

Eventually, Plantronics may look to combine its cloud management platform with Polycom’s, allowing partners to control phones and headsets from the same application, said Irwin Lazar, analyst at Nemertes Research, based in Mokena, Ill. This would give Plantronics and Polycom an advantage over competitors such as Yealink and AudioCodes.

“The endpoint market is fairly competitive, so wrapping management capabilities around the devices is an attractive means to provide a differentiated offering,” Lazar said.

Bringing Device Support to Windows Server Containers

When we introduced containers to Windows with the release of Windows Server 2016, our primary goal was to support traditional server-oriented applications and workloads. As time has gone on, we’ve heard feedback from our users about how certain workloads need access to peripheral devices—a problem when you try to wrap those workloads in a container. We’re introducing support for select host device access from Windows Server containers, beginning in Insider Build 17735 (see table below).

We’ve contributed these changes back to the Open Containers Initiative (OCI) specification for Windows. We will be submitting changes to Docker to enable this functionality soon. Watch the video below for a simple example of this work in action (hint: maximize the video).

What’s Happening

To demonstrate the workflow, we have a simple client application that listens on a COM port and reports incoming integer values (the PowerShell console on the right). We did not have any devices on hand that speak over physical COM, so we ran the application inside a VM and assigned the VM’s virtual COM port to the container. To mimic a COM device, we created an application that generates random integer values and sends them over a named pipe to the VM’s virtual COM port (the PowerShell console on the left).

As the video shows, if we do not assign the COM port to our container, the application running in the container fails with an IOException when it tries to open a handle to the port (because, as far as the container knew, the COM port didn’t exist!). On the second run, we assign the COM port to the container, and the application successfully receives and prints the incoming random integers generated by our app running on the host.
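The listener side of this demo is easy to sketch. The snippet below is not Microsoft’s actual test app; it’s a minimal stand-in that parses a stream of newline-delimited integers. In the real demo the stream would be a handle to the assigned COM port (e.g. opened with a serial library), but any file-like object works, so the logic can be exercised without hardware:

```python
import io

def read_ints(stream, count):
    """Read `count` newline-delimited integer values from a byte stream.

    In the container demo, `stream` would be an open handle to the
    COM port assigned via --device; here we use an in-memory buffer.
    """
    values = []
    for _ in range(count):
        line = stream.readline().decode().strip()
        values.append(int(line))
    return values

# Stand-in for the host-side generator feeding the virtual COM port:
fake_port = io.BytesIO(b"17\n4\n42\n")
print(read_ints(fake_port, 3))  # -> [17, 4, 42]
```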

How It Works

Let’s look at how it will work in Docker. From a shell, a user will type:

docker run --device="<IdType>/<Id>"

For example, if you wanted to pass a COM port to your container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" mcr.microsoft.com/windowsservercore-insider:latest

The value we’re passing to the device argument is simple: it consists of an IdType and an Id, delimited by a slash, “/”. For this coming release of Windows, we only support an IdType of “class”; the Id is a device interface class GUID. Whereas in Linux a user assigns individual devices by specifying a file path in the “/dev/” namespace, in Windows we’re adding support for specifying an interface class, and all devices that identify as implementing that class will be plumbed into the container.
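To make the format concrete, here is a toy parser for the `--device` value. This is not Docker’s actual implementation, just an illustration of the slash-delimited IdType/Id convention and the “class”-only restriction described above:

```python
def parse_device_arg(arg):
    """Split a --device value into (IdType, Id).

    The two parts are delimited by a slash; in this release only
    the "class" IdType is supported, with Id being an interface
    class GUID.
    """
    id_type, _, dev_id = arg.partition("/")
    if id_type != "class":
        raise ValueError(f"unsupported IdType: {id_type!r}")
    return id_type, dev_id

print(parse_device_arg("class/86E0D1E0-8089-11D0-9CE4-08003E301F73"))
```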

If a user wants to specify multiple classes to assign to a container:

docker run --device="class/86E0D1E0-8089-11D0-9CE4-08003E301F73" --device="class/DCDE6AF9-6610-4285-828F-CAAF78C424CC" --device="…" mcr.microsoft.com/windowsservercore-insider:latest

What are the Limitations?

Process isolation only: We only support passing devices to containers running in process isolation; Hyper-V isolation is not supported, nor do we support host device access for Linux Containers on Windows (LCOW).

We support a specific list of devices: In this release, we targeted a specific set of features and a specific set of host device classes. We’re starting with simple buses. The complete list we currently support is below.

Device Type    Interface Class GUID
GPIO           916EF1CB-8426-468D-A6F7-9AE8076881B3
I2C Bus        A11EE3C6-8421-4202-A3E7-B91FF90188E4
COM Port       86E0D1E0-8089-11D0-9CE4-08003E301F73
SPI Bus        DCDE6AF9-6610-4285-828F-CAAF78C424CC
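The table above is small enough to capture as a lookup for building the `--device` flag. The helper name below is ours, not part of any Docker or Windows tooling; the GUIDs are the supported interface classes listed in the table:

```python
# Device-interface-class GUIDs from the table above, keyed by bus type.
SUPPORTED_DEVICE_CLASSES = {
    "GPIO": "916EF1CB-8426-468D-A6F7-9AE8076881B3",
    "I2C Bus": "A11EE3C6-8421-4202-A3E7-B91FF90188E4",
    "COM Port": "86E0D1E0-8089-11D0-9CE4-08003E301F73",
    "SPI Bus": "DCDE6AF9-6610-4285-828F-CAAF78C424CC",
}

def device_flag(bus):
    """Build the docker run --device argument for a supported bus type."""
    return f'--device="class/{SUPPORTED_DEVICE_CLASSES[bus]}"'

print(device_flag("COM Port"))
```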

Stay tuned for Part 2 of this blog, which explores the architectural decisions we made in Windows to add this support.

What’s Next?

We’re eager to get your feedback. What specific devices are most interesting for you and what workload would you hope to accomplish with them? Are there other ways you’d like to be able to access devices in containers? Leave a comment below or feel free to tweet at me.


Craig Wilhite (@CraigWilhite)

Windstream SD-WAN gets help connecting to the cloud

Network service provider Windstream Communications plans to release a service in August for connecting the Windstream SD-WAN to applications running on Microsoft Azure. The product, called SD-WAN Cloud Connect, is designed to provide a reliable connection to public clouds.

Windstream introduced the service in July, with initial support limited to Amazon Web Services. Windstream plans to add support for other cloud providers over time.

Connecting corporate employees to application services running in a public cloud is not a trivial matter. Corporate IT has to know the performance requirements of cloud-based applications and the expected usage patterns to estimate network bandwidth capacity. Engineers also have to identify potential bottlenecks and plan for monitoring network traffic and network connection endpoints after deploying applications in the cloud.
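The capacity-planning step described above amounts to back-of-the-envelope arithmetic: concurrent users times per-user demand, plus headroom for bursts. The formula and every figure below are illustrative assumptions, not Windstream guidance:

```python
def required_bandwidth_mbps(users, kbps_per_user, concurrency=0.6, headroom=1.3):
    """Rough link-capacity estimate for a cloud-hosted application.

    users:         total employees using the app
    kbps_per_user: expected bandwidth per active session
    concurrency:   fraction of users active at once (assumed 60%)
    headroom:      safety margin for bursts (assumed 30%)
    """
    return users * concurrency * kbps_per_user / 1000 * headroom

# e.g. 200 employees on a SaaS app needing ~150 kbps per session:
print(round(required_bandwidth_mbps(200, 150), 1))  # -> 23.4 (Mbps)
```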

Windstream’s virtual edge device

Windstream’s latest Cloud Connect service is designed to eliminate some of the hassles of connecting to the public cloud. The service connects through a virtual edge device that communicates with the Windstream SD-WAN Concierge offering, which is a premises-based version of VMware’s VeloCloud.

Windstream can deploy the edge device in its data center or on a customer’s virtualized server. After installing the software, Windstream activates it and handles all management chores as part of the customer’s Windstream SD-WAN service.

Windstream provides an online portal for creating, deploying and managing SD-WAN routing and security policies. The site includes a console for accessing real-time intelligence on link performance.

Windstream’s partnership with an SD-WAN vendor is not unique. Many service providers have announced such deals to compete for a share of the fast-growing market. Other alliances include Comcast Business and CenturyLink with Versa Networks; Verizon with Viptela, which is owned by Cisco; and AT&T and Sprint with VeloCloud.

Windstream, which serves mostly small and midsize enterprises, has grown its network service business through acquisition. In January, Windstream announced it would acquire Mass Communications, a New York-based competitive local exchange carrier. In 2017, Windstream completed the acquisitions of Broadview and EarthLink.