
Logi Analytics unveils new platform for developers

Logi Analytics unveiled Logi Composer on Wednesday, marking the launch of a new platform for developers to create embedded analytics applications.

Logi, founded in 2000 and based in McLean, Va., specializes in providing tools for developers to build applications. CEO Steven Schneider said that the release of Logi Composer, which is now generally available, is the culmination of more than seven years of research and development and is the most significant new product in the vendor’s history.

The new platform doesn’t replace existing products such as Logi Info and can be used in concert with the vendor’s other analytics tools.

“This is our most significant release since our founding,” Schneider said. “All of our other product releases are essentially add-ons to Logi Info to extend its capabilities. Composer is the first product that we took with the experience of having done [embedded application software development] for 17 years and built from the ground up to optimize around this use case for applications.”

Logi Composer has a cloud-ready, microservices architecture and comes with data connectors that help users access their data and speed up queries. Meanwhile, using the platform, developers can customize and embed data visualizations and embed self-service features, which can be tailored to the skill level of the end users.
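The skill-level tailoring mentioned above can be pictured as simple feature gating. A minimal Python sketch, with hypothetical tier and feature names (not Logi Composer’s actual API):

```python
# Hypothetical sketch: gating embedded self-service features by end-user skill level.
# Tier and feature names are illustrative only.

SELF_SERVICE_TIERS = {
    "viewer":  {"filter", "export"},
    "analyst": {"filter", "export", "edit_visual", "change_chart_type"},
    "author":  {"filter", "export", "edit_visual", "change_chart_type",
                "create_dashboard", "write_query"},
}

def allowed_features(skill_level: str) -> set:
    """Return the self-service features an embedded dashboard exposes to a user."""
    # Unknown skill levels fall back to the most restrictive tier.
    return SELF_SERVICE_TIERS.get(skill_level, SELF_SERVICE_TIERS["viewer"])

def can_use(skill_level: str, feature: str) -> bool:
    return feature in allowed_features(skill_level)
```

The point of the sketch is that the embedding developer, not the end user, decides which self-service capabilities each audience sees.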

And beyond its technological capabilities, the platform was designed with both ease of use and customization in mind, according to Schneider.

A sample Logi Analytics dashboard displays an organization’s event sales data.

“We believe we cracked the code between ease of use — an application you get into production and make it easy to use and create new content — and ultimate control,” Schneider said. “There are a lot of products out there that are pretty easy to embed and pretty easy to deliver, but when you really want to customize the end-user experience down to the nth degree, it’s really hard to do that.”

Key to its capabilities, Schneider added, is that Logi Composer was built independently of other Logi analytics tools and not as an add-on. In addition, he said, Logi’s 2019 acquisition of Zoomdata and partnership with ERP vendor QAD played roles in the development of the final product.

Analysts, meanwhile, view the new platform as a significant tool for embedded application developers.

“I think this announcement is a doubling down of sorts for the company,” said Mike Leone, senior analyst at Enterprise Strategy Group. “Logi has had a fantastic presence with developers already and this announcement is enabling them to empower even more developers to build faster, iterate faster, integrate more, and the result will be an improved level of time-to-value.”

Similarly, Doug Henschen, principal analyst at Constellation Research, noted that the release of Logi Composer is a substantial move for the analytics vendor.


“Logi is clearly upping its game on developer-oriented microservices, APIs and support for containerized deployment,” he said. “They’re also taking advantage of query engine and performance-enhancing back-end improvements leveraged from last year’s Zoomdata acquisition. The announcement brings progress on multiple fronts.”

While Logi has specialized in providing a platform for developers, it isn’t the only analytics software vendor equipping developers with the tools they need to create applications.

Both Salesforce and Microsoft have tools geared toward low-code/no-code application development. And among vendors specializing in analytics, Looker, Sisense and Yellowfin all recently introduced new tools for developers.

Logi, however, remains one of the analytics vendors most focused on tools for developers.

“Logi Analytics was early to specialize in embedded capabilities aimed at developers, so it has a great deal of experience in catering to this market, but it does face increased competition,” Henschen said. “It’s an important announcement for Logi, but it’s not going to put it way out ahead of competitors.”

Leone, however, posited that the release of Logi Composer, with its focus on development teams rather than end users, differentiates Logi’s platform.

“I would argue that Logi is one of the better BI players when it comes to being developer-centric,” he said. “I think this announcement keeps them there and if anything, separates them a bit more. While virtually all players in this space are focused on the developers … a majority are coming from a business analyst point of view. Logi’s existing presence within development teams enables them to expand that presence.”

Schneider, meanwhile, said he recognizes that other vendors are adding tools for developers to their platforms but sees a significant difference in the way Logi is going about it.

“This is all we do,” he said, adding that when competing for business he rarely sees potential customers looking at BI vendors for their embedded application development needs.

With Logi Composer now on the market after seven years in the pipeline, self-service analytics and the needs of DevOps teams will remain priorities for the vendor’s product development.

Adding more capabilities related to monitoring and scale is prominent on the vendor’s roadmap, Schneider said, as is ease of use.

“We have a pretty aggressive list of things we want to add to the product,” he said.


Microsoft integration adds more AI to Tibco analytics suite

Tibco continues to add to the augmented intelligence capabilities of its analytics platform, most recently revealing that Tibco Spotfire and Tibco Data Science now support Microsoft Azure Cognitive Services.

Azure Cognitive Services is a system from Microsoft that enables application developers to embed AI and machine learning capabilities. Spotfire, meanwhile, is Tibco’s chief business intelligence tool for data visualization, while Data Science is a BI tool focused less on visualization and more on hardcore data analysis; the two can be used together or independently of one another.

Tibco, founded in 1997 and based in Palo Alto, Calif., is adding support for Azure Cognitive Services following other AI investments in its analytics platform. In January 2020, the vendor added to the natural language generation capabilities of Spotfire via an integration with Arria NLG Studio for BI, and in the fall of 2019 it unveiled new products and added to existing ones with the credo of AI everywhere.

Meanwhile, the vendor’s addition of native support for Azure Cognitive Services, revealed June 2, comes after Tibco expanded the multi-cloud capabilities of its analytics platform through an integration with Microsoft Azure late in 2019; it already had an integration with Amazon Web Services and supports Google Cloud, among other cloud service providers.

“We don’t believe that AI is a marketing tool or a marketing term,” said Matt Quinn, Tibco’s COO. “We see that AI can actually be used as [a] foundational element in people’s systems, and so working with Microsoft, doing this integration, is all about us being able to use our own technology, inside of our own products, as a foundational layer.”

A sample Tibco dashboard displays an organization’s data.

AI, meanwhile, is an area where Tibco should be focused, according to Rick Sherman, founder and managing partner of Athena IT Solutions.

“With Spotfire, AI is definitely where they should be,” he said. “AI, machine learning and data science is where they’re great. They’re geared to sophisticated users, and if you’re doing a deeper dive, doing serious visualizations, Tibco is a way you want to go.”

Beyond simply adding a new integration, Tibco’s move to enable application developers to embed AI and machine learning capabilities by using Azure Cognitive Services continues the vendor’s process of expanding its analytics platform.

While some longtime BI vendors have struggled to maintain an innovative platform, Tibco, after losing some momentum in the early 2000s, has been able to remain among the top vendors with a suite of BI tools that are considered innovative.


Tibco’s platform is entirely cloud-based, which allows Tibco to deliver new and upgraded features without having to roll out a major update each time, and its partnership strategy gives it the ability to embed products such as Azure Cognitive Services and Arria NLG Studio for BI without having to develop them in-house.

“Tibco has really evolved into a much more partner-centric company,” Quinn said. “We realize we are part of a broader ecosystem of tools and technologies, and so these partnerships that we’ve created are pretty special and pretty important, and we’ve been really happy with the bidirectional [nature] of those, especially the relationship with Microsoft. It’s clear that they have evolved as we have evolved.”

As far as motivation for the addition of Azure Cognitive Services to the Tibco analytics platform, Quinn said it’s simply about making data scientists more productive.

Customers, he added, were asking for the integration, while Tibco had a preexisting relationship with Microsoft that made adding Azure Cognitive Services a natural fit.

“Data scientists use all sorts of tools from all different walks of life, and because of our integration heritage we’re really good at integrating those types of things, so what we’re doing is we’re opening up the universe of all the Microsoft pieces to this data science group that just wants to be more productive,” Quinn said. “It enhances the richness of the platform.”

Similarly, Sherman said that the new integration is a positive move for data scientists.

Tibco’s acquisitions in recent years, such as its 2018 purchase of Scribe Software and its 2019 purchase of SnappyData, helped advance the capabilities of Tibco’s analytics platform, and now integrations are extending those capabilities further.

“They’re doing some excellent things,” Sherman said. “They’re aiming at deeper analytics, digging deeper into data science and data engineering, and this move to get their analytics closer to their data science makes a heck of a lot of sense.”

In the coming months, Quinn said that Tibco plans to continue adding integrations in order to add to the capabilities of its analytics platform. In addition, ease of use will be a significant focus for the vendor.

Meanwhile, ModelOps — the lifecycle of model development — will be a new area of emphasis for Tibco.

“ModelOps is really the set of things you have to do to take a model, whether it’s AI or just plain data science, and move it into the real world, and then how do you change it, how do you evolve it, who needs to sign off on it,” Quinn said. “For Tibco it’s great because it really brings together the data science piece with the hardcore engineering stuff that people have known us for.”


U.K. agency uses Oracle analytics to ID annual savings

A top United Kingdom government agency turned to the Oracle analytics platform in pursuit of a billion pounds sterling ($1.23 billion USD) in annual cost savings.

The National Health Service Business Services Authority (NHSBSA), part of the U.K.’s Department of Health with a focus on supporting the government-run National Health Service, manages 36 billion pounds of the NHS’ spending each year.

In 2014, the agency was told to identify and deliver the recurring savings by 2018. Analytics would be key in carrying out the mandate, said Andrew Mason, the NHSBSA’s data warehouse and BI manager, speaking Thursday at a session of the virtually held Oracle Analytics Summit, which Oracle is spreading out over a series of sessions from May 12 through Aug. 18.

“It was this really that kick-started the data, analytics and insight area of the NHSBSA,” Mason said. “We set up a data science team. We quickly identified 100 million pounds worth of savings in the first six months, and that got us the buy-in [from the NHS] we needed to carry on our analytics journey.”

After building a data and analytics team, the NHSBSA considered various analytics platforms, including Informatica and even building its own. Ultimately, with its data coming in from both internal and external sources as well as on premises and from the cloud, the agency chose the Oracle analytics suite with Oracle Exadata as its database platform.

“At that moment in time — and probably still is — [we believe] it’s the best hardware you can buy for this kind of analytics,” Mason said, referring to the Exadata platform’s servers. “It was secure, it was stable, and we had lots of support from Oracle helping us get set up.”

Two years later, in 2016, the NHSBSA expanded its analytics capabilities to make data the driver behind its decisions. It began developing machine learning algorithms, building up its enterprise data warehouse and further developing its business intelligence capabilities by adding Oracle Analytics Cloud.

That allowed the agency to make more sense of its transactional data, evolve its internal management information and improve its products and services, all of which led to cost efficiencies.

Now, using the Oracle analytics platform, the NHSBSA is able to bring all of its raw data into a landing area, move it into a staging area where it gets transformed, visualize the data and finally use it to create aggregate tables and prebuilt reports. And according to Mason, the agency’s analytics stack from Oracle can load 1.5 million database transactions in 45 minutes, stage the data in another 90 minutes, and move it to the final stage for analysis in 12 more minutes.
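The staged flow Mason describes (landing area, staging with transformation, then prebuilt aggregates) can be sketched in miniature. The fields and the normalization step below are hypothetical, not the NHSBSA’s actual schema:

```python
# Illustrative sketch of a landing -> staging -> presentation pipeline.
# Record fields and the transform are invented sample data.
from collections import defaultdict

def land(raw_rows):
    """Landing area: raw records arrive untouched."""
    return list(raw_rows)

def stage(landed):
    """Staging area: clean and transform (here, normalize region names)."""
    return [{**r, "region": r["region"].strip().title()} for r in landed]

def aggregate(staged):
    """Presentation layer: a prebuilt aggregate table of spend per region."""
    totals = defaultdict(float)
    for r in staged:
        totals[r["region"]] += r["amount"]
    return dict(totals)

raw = [{"region": " north east ", "amount": 120.0},
       {"region": "North East", "amount": 80.0}]
print(aggregate(stage(land(raw))))  # -> {'North East': 200.0}
```

Each stage only consumes the previous stage’s output, which is what lets the timings of the three phases be measured separately, as in the figures Mason cites.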


Meanwhile, the agency has 4,000 registered users running about 8,000 queries per day, each of which averages just 14 seconds.

“By 2019, the NHSBSA had reached the target of identifying savings of a billion pounds, and data analytics supported that to the tune of over 800 million pounds,” Mason said. “But we didn’t stop there. We continue to push ourselves in the data and analytics arena.”

Among the ways the Oracle analytics platform helped the NHSBSA reduce spending was by identifying fraud and waste.

Mason said that over a billion pounds in U.K. healthcare funding is lost to fraud annually, and “data and analytics can help tackle this.” It can help identify such fraud as incidences of unnecessary prescriptions and activity in the name of deceased patients.
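One of the fraud checks mentioned above, activity in the name of deceased patients, reduces to a simple date comparison. A hedged sketch with made-up sample data:

```python
# Hypothetical illustration of one fraud check: flagging prescriptions
# dispensed after a patient's recorded date of death. All data is invented.
from datetime import date

deaths = {"P001": date(2019, 3, 1)}  # patient id -> recorded date of death

def flag_posthumous(prescriptions):
    """Return prescriptions dispensed after the patient's recorded death."""
    return [p for p in prescriptions
            if p["patient"] in deaths and p["dispensed"] > deaths[p["patient"]]]

rx = [{"patient": "P001", "dispensed": date(2019, 4, 15), "drug": "atorvastatin"},
      {"patient": "P002", "dispensed": date(2019, 4, 15), "drug": "amoxicillin"}]
```

In practice such a check depends on joining prescription records with a death register, which is exactly the kind of cross-source join the agency’s warehouse enables.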

Meanwhile, the NHSBSA’s use of Oracle analytics can help it identify wasted spending on brand-name pharmaceuticals when generic options are available, according to the agency.

Another way the Oracle analytics platform helps the agency reduce costs is by improving prescribing behavior, in particular curbing the over-prescribing of antibiotics and other drugs. It also can help pinpoint incidences of polypharmacy, the concurrent use of multiple medications, which can sometimes lead to dangerous consequences.
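The polypharmacy check amounts to counting active medications per patient. A minimal sketch, using a 10-medication threshold:

```python
# Hypothetical sketch of a polypharmacy check: count active prescriptions
# per patient and flag those at or above a threshold.
from collections import Counter

def polypharmacy_patients(active_meds, threshold=10):
    """active_meds: list of (patient_id, drug) pairs for active prescriptions."""
    counts = Counter(patient for patient, _ in active_meds)
    return {p for p, n in counts.items() if n >= threshold}
```

A real implementation would also need to window prescriptions by date so that only genuinely concurrent medications are counted.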

A patient holds a bottle of medication while looking up information on her phone.

“The data scientists did the grunt work,” Mason said. “They came up with the insights, and then we productionized that through our data warehouse and BI dashboards and started pushing that data out to the wider NHS to start improving behaviors as well as putting that data in the hands of policymakers to make more informed decisions.”

Over the past two years, according to Mason, general practitioners in the U.K. have written more than a million fewer antibiotic prescriptions. Meanwhile, nearly 10,000 fewer patients are on 10 or more medications.

Now, amid the COVID-19 crisis, the NHSBSA is using the Oracle analytics platform to help the U.K. combat the virus.

Its data has helped reduce in-office visits by identifying patients who could benefit from getting prescriptions filled electronically. The data also identifies potential pressures on pharmacists by looking at how many regular customers are over 70 years old. In addition, the system helps administrators discover who should be on the U.K.’s shielded patient list — those at high risk of complications who were advised at the start of the lockdown to stay at home for 12 weeks.

“For our data sets to be used in such important work, I don’t think there’s any bigger compliment,” Mason said. “It makes all the blood, sweat and tears worth it.”


Sony Semiconductor Solutions and Microsoft partner to create smart camera solutions for enterprise customers

Companies collaborate to make video analytics solutions more accessible in order to drive better business outcomes

TOKYO — May 19, 2020 — Sony Semiconductor Solutions (Sony) and Microsoft Corp. (Microsoft) today announced they are partnering to create solutions that make AI-powered smart cameras and video analytics easier to access and deploy for their mutual customers.

As a result of the partnership, the companies will embed Microsoft Azure AI capabilities on Sony’s intelligent vision sensor IMX500, which extracts useful information out of images in smart cameras and other devices. Sony will also create a smart camera managed app powered by Azure IoT and Cognitive Services that complements the IMX500 sensor and expands the range and capability of video analytics opportunities for enterprise customers. The combination of these two solutions will bring together Sony’s cutting-edge imaging & sensing technologies, including the unique functionality of high-speed edge AI processing, with Microsoft’s cloud expertise and AI platform to uncover new video analytics opportunities for customers and partners across a variety of industries.

“By linking Sony’s innovative imaging and sensing technology with Microsoft’s excellent cloud AI services, we will deliver a powerful and convenient platform to the smart camera market. Through this platform, we hope to support the creativity of our partners and contribute to overcoming challenges in various industries,” said Terushi Shimizu, Representative Director and President, Sony Semiconductor Solutions Corporation.

“Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses,” said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft. “Through this partnership, we’re combining Microsoft’s expertise in providing trusted, enterprise-grade AI and analytics solutions with Sony’s established leadership in the imaging sensors market to help uncover new opportunities for our mutual customers and partners.”

Video analytics has emerged as a way for enterprise customers across industries to uncover new revenue opportunities, streamline operations and solve challenges. For example, retailers can use smart cameras to detect when to refill products on a shelf or to better understand the optimal number of available open checkout counters according to the queue length. Additionally, a manufacturer might use a smart camera to identify hazards on its manufacturing floor in real time before injuries occur. Traditionally, however, such applications — which rely on gathering data distributed among many smart cameras across different sites like stores, warehouses and distribution centers — struggle to optimize the allocation of compute resources, resulting in cost or power consumption increases.
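The checkout-counter example reduces to a small allocation rule: open enough counters to keep each queue at or below a target length. The policy below is purely illustrative, not anything from the Sony or Microsoft products:

```python
# Toy sketch of the retail scenario: choosing how many checkout counters to
# open from smart-camera queue counts. Thresholds are illustrative only.
import math

def counters_to_open(people_in_queue: int, max_counters: int,
                     target_queue_per_counter: int = 3) -> int:
    """Open enough counters to keep each queue at or below the target length."""
    needed = math.ceil(people_in_queue / target_queue_per_counter) if people_in_queue else 0
    # Always keep at least one counter open; never exceed what the store has.
    return min(max(needed, 1), max_counters)
```

The edge/cloud split described in the article would put the people-counting on the camera and only send the small queue-count numbers upstream, which is where the cost and power savings come from.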

To address these challenges, Sony and Microsoft will partner to simplify access to computer vision solutions by embedding Azure AI technology from Microsoft into Sony’s intelligent vision sensor IMX500 as well as enabling partners to embed their own AI models. This integration will result in smarter, more advanced cameras for use in enterprise scenarios as well as a more efficient allocation of resources between the edge and the cloud to drive cost and power consumption efficiencies.

Sony’s smart camera managed app powered by Azure is targeted toward independent software vendors (ISVs) specializing in computer vision and video analytics solutions, as well as smart camera original equipment manufacturers (OEMs) aspiring to add value to their hardware offerings. The app will complement the IMX500 sensor and will serve as the foundation on which ISVs and OEMs can train AI models to create their own customer- and industry-specific video analytics and computer vision solutions that address enterprise customer demands. The app will simplify key workflows and take reasonable security measures designed to protect data privacy and security, allowing ISVs to spend less time on routine, low-value integration and provisioning work and more time on creating unique solutions to meet customers’ demands. It will also enable enterprise customers to more easily find, train and deploy AI models for video analytics scenarios.

As part of the partnership, Microsoft and Sony will also work together to facilitate hands-on co-innovation with partners and enterprise customers in the areas of computer vision and video analytics as part of Microsoft’s AI & IoT Insider Labs program. Microsoft’s AI & IoT Insider Labs offer access and facilities to help build, develop, prototype and test customer solutions, working in partnership with Microsoft experts and other solution providers like Sony. The companies will begin working with select customers within these co-innovation centers later this year.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

About Sony Semiconductor Solutions

Sony Semiconductor Solutions Corporation is the global leader in image sensors. We strive to provide advanced imaging technologies that bring greater convenience and joy to people’s lives. In addition, we also work to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both human and machines to greater heights. For more information, please visit: https://www.sony-semicon.co.jp/e/

For more information, press only:


Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]


Sony Corporate Communications, [email protected]


Author: Microsoft News Center

Looker analytics platform adding app development capability

Looker is maintaining a focus on application development as it continues to add new features to its analytics platform six months after its last major release and three months after it finally joined forces with Google Cloud.

The vendor, which was founded in 2012 and is based in Santa Cruz, Calif., was acquired by Google for $2.6 billion in June 2019, just four days before Tableau was purchased by Salesforce for $15.7 billion. Unlike Tableau, however, which serves a largely on-premises customer base and delivers platform updates quarterly, Looker is entirely cloud-based and therefore, beyond its one major update each year, delivers new and upgraded features throughout the year.

Looker 7, released in November 2019, included a new application development framework and enhanced embedded BI capabilities. Since then, Looker has kept adding to its set of tools for application developers, enhancing the power of its no-code query capabilities and providing new ways to embed analytics into the applications customers use in their everyday workflows.

“Developers are their bread and butter,” said Mike Leone, senior analyst at Enterprise Strategy Group. “It’s all about enabling developers to seamlessly, intelligently and rapidly incorporate analytics at scale into modern applications. This is and has been a top priority for Looker.”

Meanwhile, as Looker has continued to build up its analytics platform, the vendor’s acquisition was finalized. The purchase closed so recently, however, that there hasn’t yet been any obvious evidence of collaboration between Looker and Google Cloud, analysts said.


“I have not seen anything yet to suggest that they’ve made a dramatic change yet in their approach,” said Dave Menninger, research director of data and analytics research at Ventana Research.

He added, however, that Looker and Google Cloud share a lot of similarities and the two are a natural fit. In particular, the way Looker uses its LookML language to enable developers to build applications without having to write complex code fits in with Google Cloud’s focus.

“Looker has found a good partner in Google in the sense that Looker is really targeted at building custom apps,” Menninger said. “Looker is all about the LookML language and constructing these analyses, these displays that are enhanced by the LookML language. And a large part of Google, the Google Cloud Platform division, is really focused on that developer community. So Looker fits into that family well.”

Leone, meanwhile, also said he’s still waiting to see Google’s influence on Looker but added that he expects to hear more about their integration in the near future.

And collaboration, according to Pedro Arellano, Looker’s vice president of product marketing, is indeed on the horizon. The two are working together on new features. And because Looker is entirely cloud-based, had a strong partnership with Google Cloud before the acquisition and shared 350 customers with it, Looker’s integration into the Google Cloud portfolio is proceeding more rapidly than it might have if Looker served a host of on-premises customers.

“It’s exciting to talk with the product teams and understand where the potential integration points are and think about these really exciting things that we’ll be able to develop, some things that I expect will be out in a relatively short amount of time,” Arellano said. “That work is happening, and it’s absolutely something we’re doing today.”

As far as features Looker has added to the analytics platform since last fall, one of the key additions is the Slack integration, which the vendor unveiled when Looker 7 was released but which was still in beta testing at the time. The tool delivers insights directly into customers’ Slack conversations.

Beyond the Slack integration, Looker has added to its extension network, which is its low-code/no-code tool set for developers. Among the latest new tools are the Data Dictionary, which pulls up metadata about fields built by developers using the LookML model and displays them in a digestible format, as well as tools that help developers customize user interfaces and create dashboard extensions such as adding a chat widget.

In terms of query power, Looker has developed what it calls aggregate awareness, a feature that uses augmented intelligence and machine learning to reduce the amount of time it takes a user to run a query and helps them run more focused queries.
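The idea behind aggregate awareness can be illustrated as query routing: send each query to the smallest pre-aggregated table that covers the requested fields, falling back to the raw table otherwise. Table names and row counts below are invented, not Looker’s actual implementation:

```python
# Illustrative sketch of aggregate-aware query routing. Tables, fields and
# row counts are hypothetical sample data.

TABLES = [
    {"name": "orders_by_day",
     "fields": {"order_date", "revenue"}, "rows": 1_000},
    {"name": "orders_by_day_region",
     "fields": {"order_date", "region", "revenue"}, "rows": 20_000},
    {"name": "orders_raw",
     "fields": {"order_id", "order_date", "region", "customer_id", "revenue"},
     "rows": 50_000_000},
]

def route_query(requested_fields):
    """Pick the cheapest (smallest) table whose fields cover the request."""
    candidates = [t for t in TABLES if set(requested_fields) <= t["fields"]]
    return min(candidates, key=lambda t: t["rows"])["name"]
```

The win is that users write one logical query and the routing layer quietly substitutes a much smaller table whenever one can answer it.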

“We really think of Looker as a platform for developing and building and deploying any kind of data experience that our customers might imagine,” Arellano said. “We recognize that we can’t anticipate all the data experiences they might come up with. We’re very focused on the developers because these are the people that are building those experiences.”

In addition to the new features Looker has added since the release of Looker 7, the vendor put together the Looker COVID-19 Data Block, a free hub for data related to the ongoing pandemic that includes data models and links to public sources such as Johns Hopkins University, the New York Times and the COVID Tracking Project. The hub uses LookML to power frequent updates and deliver the data in prebuilt dashboards.

“This was an opportunity to do good things with technology and with data,” Arellano said.

As Looker continues to enhance its analytics platform, one of the next areas the vendor says it will focus on is the platform’s mobile capabilities.

Mobile has long been a difficult medium for BI vendors, with data difficult to digest on the small screens of phones and tablets. Many, as a result, have long ignored mobile. Recently, however, vendors such as Yellowfin and MicroStrategy have made significant investments in their mobile capabilities, and Arellano said that Looker plans to offer an improved mobile experience sometime in the second half of 2020.

That fits in with what Leone expects from Looker now that it’s under the Google Cloud umbrella, which is a broadening of the vendor’s focus and capabilities.

“I think, individually, they were behind a few of the leaders in the space, but the Google acquisition almost instantly brought them back on par with direct competition,” he said. “Google’s influence will be beneficial, especially around the ideas of democratizing analytics/insights, faster on-ramp and a much wider vision that incorporates a powerful AI vision.”


Oracle Analytics for Cloud HCM gives new capabilities to HR

The just-released Oracle Analytics for Cloud HCM platform can do some familiar things for HR. It can, for instance, produce reports about profit and revenue per employee. But it also takes these familiar analytics one step further. HR can use the tool, for instance, to hunt for relationships between employee engagement and revenue data.

Until the release of Oracle Analytics for Cloud HCM, delivering some types of HR and finance data may have required coordination with finance. But the tool now gives HR the ability to run these reports as needed, according to the vendor.

Indeed, Mark Brandau, an analyst at Forrester Research, said Oracle Analytics for Cloud HCM may give HR managers a new ability to run their own analytics. HR will “have additional ways to leverage the people and operational data to make better decisions,” he said.

But HR’s use of analytics broadly “depends on how the solutions are delivering the data and making it consumable every day for people who aren’t familiar with data,” Brandau said.

Oracle Analytics for Cloud HCM, announced via an online conference, is available to its HCM users. It’s part of the vendor’s Oracle Analytics for Applications product line. 

Bruno Aziza, group vice president of Oracle Analytics, said the HCM analytics application stores and makes data available for analytics in an “autonomous data warehouse,” which has security, repair and high availability features that don’t require user intervention. It will have a set of analytical HR modules around employee data, but it will also allow mashups of data from other sources, he said.

Use of unstructured data

Where Aziza believes the analytics application will differentiate itself is in its ability to take unstructured survey data, or something like 360-degree feedback data, and combine it with financial data.

For instance, take a firm analyzing the performance of sales teams across geographies, Aziza said. In this process, HR may use Oracle Analytics for Cloud HCM and discover a performance problem in a specific region.

We couldn’t really see where the blockages were, and we worked closely with the HR teams to make relationships.
Katherine Thompson, head of reporting and analytics for the Metis Program, Home Office

“Maybe there are some dimensions that explain this lack of performance related to HR components,” Aziza said. It could be a consequence of new employees, dissatisfied employees or engagement issues, he said.

All the major HCM vendors — Oracle, Workday, SuccessFactors, Ultimate Software — have made significant investments in analytics, Forrester’s Brandau said. Part of the drive to analytics is to “help standardize some of the practices and metrics and the way that HR operates,” he said.

At its online launch, Oracle hosted a customer panel that included Katherine Thompson, head of reporting and analytics for the Metis Program at the Home Office in the U.K. The Home Office is responsible for immigration, security and other issues. The Metis Program is the name for a migration to cloud-based ERP using Oracle.

Thompson said the Home Office has been using Oracle’s analytics to identify ways to improve the time it takes to hire someone. It involves different systems, including recruiting and security clearances. “We couldn’t really see where the blockages were, and we worked closely with the HR teams to make relationships,” she said. The Home Office has since sped up the hiring process, she said.


Databricks bolsters security for data analytics tool

One of the biggest challenges with data management and analytics efforts is security.

Databricks, based in San Francisco, is well aware of the data security challenge, and recently updated its Unified Analytics Platform with enhanced security controls to help organizations minimize their data analytics attack surface and reduce risks. Alongside the security enhancements, new administration and automation capabilities make the platform easier to deploy and use, according to the company.

Organizations are embracing cloud-based analytics for the promise of elastic scalability, supporting more end users, and improving data availability, said Mike Leone, a senior analyst at Enterprise Strategy Group. That said, greater scale, more end users and different cloud environments create myriad challenges, with security being one of them, Leone said.

“Our research shows that security is the top disadvantage or drawback to cloud-based analytics today. This is cited by 40% of organizations,” Leone said. “It’s not only smart of Databricks to focus on security, but it’s warranted.”

He added that Databricks is extending foundational security in each environment, keeping it consistent across environments, and making it easier to proactively simplify administration.

As organizations turn to the cloud to enable more end users to access more data, they’re finding that security is fundamentally different across cloud providers.
Mike Leone, senior analyst, Enterprise Strategy Group

“As organizations turn to the cloud to enable more end users to access more data, they’re finding that security is fundamentally different across cloud providers,” Leone said. “That means it’s more important than ever to ensure security consistency, maintain compliance and provide transparency and control across environments.”

Additionally, Leone said that with its new update, Databricks provides intelligent automation to enable faster ramp-up times and improve productivity across the machine learning lifecycle for all involved personas, including IT, developers, data engineers and data scientists.

Gartner said in its February 2020 Magic Quadrant for Data Science and Machine Learning Platforms that Databricks Unified Analytics Platform has had a relatively low barrier to entry for users with coding backgrounds, but cautioned that “adoption is harder for business analysts and emerging citizen data scientists.”

Bringing Active Directory policies to cloud data management

Data access security is handled differently on-premises compared with how it needs to be handled at scale in the cloud, according to David Meyer, senior vice president of product management at Databricks.

Meyer said the new updates to Databricks enable organizations to more efficiently use their on-premises access control systems, like Microsoft Active Directory, with Databricks in the cloud. A member of an Active Directory group becomes a member of the same policy group with the Databricks platform. Databricks then maps the right policies into the cloud provider as a native cloud identity.
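The mapping Meyer describes can be pictured as a small lookup chain from directory group to platform policy to native cloud identity. This sketch uses invented group, policy and role names; it is not Databricks’ actual implementation:

```python
# Illustrative group-to-policy mapping; all names below are hypothetical,
# not Databricks' real configuration or API.
AD_GROUP_TO_POLICY = {
    "ad-data-engineers": "workspace-policy-etl",
    "ad-data-scientists": "workspace-policy-ml",
    "ad-analysts": "workspace-policy-readonly",
}

POLICY_TO_CLOUD_ROLE = {
    "workspace-policy-etl": "arn:aws:iam::123456789012:role/etl-access",
    "workspace-policy-ml": "arn:aws:iam::123456789012:role/ml-access",
    "workspace-policy-readonly": "arn:aws:iam::123456789012:role/read-only",
}

def resolve_cloud_roles(user_groups):
    """Map a user's Active Directory groups to native cloud IAM roles."""
    roles = []
    for group in user_groups:
        policy = AD_GROUP_TO_POLICY.get(group)
        if policy:
            roles.append(POLICY_TO_CLOUD_ROLE[policy])
    return roles

print(resolve_cloud_roles(["ad-analysts", "unrelated-group"]))
```

The point of the pattern is that group membership is managed once, on premises, and the cloud-side permissions follow from it automatically.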

Databricks uses the open source Apache Spark project as a foundational component and provides more capabilities, said Vinay Wagh, director of product at Databricks.

“The idea is, you, as the user, get into our platform, we know who you are, what you can do and what data you’re allowed to touch,” Wagh said. “Then we combine that with our orchestration around how Spark should scale, based on the code you’ve written, and put that into a simple construct.”

Protecting personally identifiable information

Beyond just securing access to data, there is also a need for many organizations to comply with privacy and regulatory compliance policies to protect personally identifiable information (PII).

“In a lot of cases, what we see is customers ingesting terabytes and petabytes of data into the data lake,” Wagh said. “As part of that ingestion, they remove all of the PII data that they can, which is not necessary for analyzing, by either anonymizing or tokenizing data before it lands in the data lake.”

In some cases, though, there is still PII that can get into a data lake. For those cases, Databricks enables administrators to perform queries to selectively identify potential PII data records.
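The tokenize-before-landing pattern Wagh describes can be sketched with a couple of regular expressions and a salted hash. The patterns, salt handling and field names below are deliberately simplified illustrations, not a production PII scrubber:

```python
import hashlib
import re

# Simplified PII patterns for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize(value: str, salt: str = "demo-salt") -> str:
    """Replace a PII value with a deterministic, non-reversible token."""
    return "tok_" + hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def scrub(record: dict) -> dict:
    """Tokenize any field that looks like an email address or SSN."""
    clean = {}
    for key, value in record.items():
        if isinstance(value, str) and (EMAIL.search(value) or SSN.search(value)):
            clean[key] = tokenize(value)
        else:
            clean[key] = value
    return clean

row = {"name": "Jane Doe", "email": "jane@example.com", "claim_total": 1200}
clean_row = scrub(row)
print(clean_row)
```

Note that a bare name like "Jane Doe" slips through the patterns, which is exactly why residual PII can still land in a data lake and why the follow-up scanning queries matter.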

Improving automation and data management at scale

Another key set of enhancements in the Databricks platform update are for automation and data management.

Meyer explained that historically, each of Databricks’ customers had basically one workspace in which they put all their users. That model, however, doesn’t let organizations isolate different users or maintain different settings and environments for various groups.

To that end, Databricks now enables customers to have multiple workspaces to better manage and provide capabilities to different groups within the same organization. Going a step further, Databricks now also provides automation for the configuration and management of workspaces.

Delta Lake momentum grows

Looking forward, the most active area within Databricks is with the company’s Delta Lake and data lake efforts.

Delta Lake is an open source project started by Databricks and now hosted at the Linux Foundation. The core goal of the project is to enable an open standard around data lake connectivity.

“Almost every big data platform now has a connector to Delta Lake, and just like Spark is a standard, we’re seeing Delta Lake become a standard and we’re putting a lot of energy into making that happen,” Meyer said.

Other data analytics platforms ranked similarly by Gartner include Alteryx, SAS, Tibco Software, Dataiku and IBM. Databricks’ security features appear to be a differentiator.


New AI tools in the works for ThoughtSpot analytics platform

The ThoughtSpot analytics platform has been available for only six years, but since 2014 the vendor has quickly gained a reputation as an innovator in the field of business intelligence software.

ThoughtSpot, founded in 2012 and based in Sunnyvale, Calif., was an early adopter of augmented intelligence and machine learning capabilities, and even as other BI vendors have begun to infuse their products with AI and machine learning, the ThoughtSpot analytics platform has continued to push the pace of innovation.

With its rapid rise, ThoughtSpot attracted plenty of funding, and an initial public offering seemed like the next logical step.

Now, however, ThoughtSpot is facing the same uncertainty as most enterprises as COVID-19 threatens not only people’s health around the world, but also organizations’ ability to effectively go about their business.

In a recent interview, ThoughtSpot CEO Sudheesh Nair discussed all things ThoughtSpot, from the way the coronavirus is affecting the company to the status of an IPO.

In part one of a two-part Q&A, Nair talked about how COVID-19 has changed the firm’s corporate culture in a short time. Here in part two, he discusses upcoming plans for the ThoughtSpot analytics platform and when the vendor might be ready to go public.

One of the main reasons the ThoughtSpot analytics platform has been able to garner respect in a short time is its innovation, particularly with respect to augmented intelligence and machine learning. Along those lines, what is a recent feature ThoughtSpot developed that stands out to you?

ThoughtSpot CEO Sudheesh Nair

Sudheesh Nair: One of the main changes that is happening in the world of data right now is that the source of data is moving to the cloud. To deliver the AI-based, high-speed innovation on data, ThoughtSpot was really counting on running the data in a high-speed memory database, which is why ThoughtSpot was mostly focused on on-premises customers. One of the major changes that happened in the last year is that we delivered what we call Embrace. With Embrace we are able to move to the cloud and leave the data in place. This is critical because as data is moving, the cost of running computations will get higher because computing is very expensive in the cloud.

With ThoughtSpot, what we have done is we are able to deliver this on platforms like Snowflake, Amazon Redshift, Google BigQuery and Microsoft Synapse. So now with all four major cloud vendors fully supported, we have the capability to serve all of our customers and leave all of their data in place. This reduces the cost to operate ThoughtSpot — the value we deliver — and the return on investment will be higher. That’s one major change.

Looking ahead, what are some additions to the ThoughtSpot analytics platform customers can expect?

Nair: If you ask people who know ThoughtSpot — and I know there are a lot of people who don’t know ThoughtSpot, and that’s OK — … if you ask them what we do they will say, ‘search and AI.’ It’s important that we continue to augment on that; however, one thing that we’ve found is that in the modern world we don’t want search to be the first thing that you do. What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?

What if search became the second thing you do, and the first thing is that what you’ve been looking for comes to you even before you ask?
Sudheesh Nair, CEO, ThoughtSpot

Let’s say you’re responsible for sales in Boston, and you told the system you’re interested in figuring out sales in Boston — that’s all you did. Now the system understands what it means to you, and then runs multiple models and comes back to you with questions you’ll be interested in, and most importantly with insights it thinks you need to know — it doesn’t send a bunch of notifications that you never read. We want to make sure that the insights we’re sending to you are so relevant and so appropriate that every single one adds value. If one of them doesn’t add value, we want to know so the system can understand what it was that was not valuable and then adjust its algorithms internally. We believe that the right action and insight should be in front of you, and then search can be the second thing you do prompted by the insight we sent to you.

What tools will be part of the ThoughtSpot analytics platform to deliver these kinds of insights?

Nair: There are two features we are delivering around it. One is called Feed, which is inspired by social media, curating insights, conversations and opinions around facts. Right now social media is all opinion, but imagine a fact-driven social media experience where someone says they had a bad quarter and someone else says it was great and then data shows up so it doesn’t become an opinion based on another opinion. It’s important that it should be tethered to facts. The second one is Monitor, which is the primary feature where the thing you were looking for shows up even before you ask in the format that you like — could be mobile, could be notifications, could be an image.

Those two features are critical innovations for our growth, and we are very focused on delivering them this year.

The last time we spoke we talked about the possibility of ThoughtSpot going public, and you were pretty open in saying that’s something you foresee. It’s about seven months later, where do plans for going public currently stand?

Nair: If you had asked me before COVID-19 I would have had a bit of a different answer, but the big picture hasn’t changed. I still firmly believe that a company like ThoughtSpot will tremendously benefit from going public because our customers are massive customers, and those customers like to spend more with a public company and the trust that comes with it.

Having said that, I talked last time about building a team and predictability, and I feel seven months later that we have built the executive team that can be the best in class when it comes to public companies. But going public also requires being predictable, and we’re getting in that right spot. I think that the next two quarters will be somewhat fluid, which will maybe set us back when it comes to building a plan to take the company public. But that is basically it. I think taken one by one, we have a good product market, we have good business momentum, we have a good team, and we just need to put together the history that is necessary so that the business is predictable and an investor can appreciate it. That’s what we’re focused on. There might be a short-term setback because of what the coronavirus might throw at us, but it’s going to definitely be a couple of more quarters of work.

Does the decline in the stock market related to COVID-19 play into your plans at all?

Nair: It’s absolutely an important event that’s going on and no one knows how it will play out, but when I think about a company’s future I never think about an IPO as a few quarters event. It’s something we want to do, and a couple of quarters here or there is not going to make a major difference. Over the last couple of weeks, we haven’t seen any softness in the demand for ThoughtSpot, but we know that a lot of our customers’ pipelines are in danger from supply impacts from China, so we will wait and see. We need to be very close to our customers right now, helping them through the process, and in that process we will learn and make the necessary course corrections.

Editor’s note: This interview has been edited for clarity and conciseness.


Natural language query tools offer answers within limits

Natural language query has the potential to put analytics in the hands of ordinary business users with no training in data science, but the technology still has a ways to go before it develops into a truly transformational tool.

Natural language query (NLQ) is the capacity to query data by simply asking a question in ordinary language rather than code, either spoken or typed. Ideally, natural language query will empower a business user to do deep analytics without having to code.

That ideal, however, doesn’t exist.

In its current form, natural language query allows someone working at a Ford dealership to ask, “How many blue Mustangs were sold in 2019?” and follow up with, “How many red Mustangs were sold in 2019?” to compare the two.

It allows someone in a clothing store to ask, “What’s the November forecast for sales of winter coats?”

It is not, however, advanced enough to pull together unstructured data sitting in a warehouse, and it’s not advanced enough to do complicated queries and analysis.

Natural language query enables business users to explore data without having to know code, giving it the potential to democratize data throughout an organization.

“We’ve had voice search, albeit in a limited capacity, for years now,” said Mike Leone, senior analyst at Enterprise Strategy Group (ESG), an IT analysis and research firm in Milford, Mass. “We’re just hitting the point where natural language processing can be effectively used to query data, but we’re not even close to utilizing natural language query for complex querying that traditionally would require extensive back-end work and data science team involvement.”

Similarly, Tony Baer, founder and CEO of the database and analytics advisory firm DBInsight, said that natural language query is not at the point where it allows for deep analysis without the involvement of data scientists.

“You can’t go into a given tool or database and ask any random question,” he said. “It still has to be linked to some structure. We’re not at the point where it’s like talking to a human and the brain can process it. Where we are is that, given guardrails, given some structure to the data and syntax, it’s an alternative to structure a query in a specific way.”

NLQ benefits

At its most basic level, business intelligence improves the decision-making process. And the more people within an organization able to do data-driven analysis, the more informed the decision-making process, not merely at the top of an organization but throughout its workforce.

We’re just hitting the point where natural language processing can be effectively used to query data, but we’re not even close to utilizing natural language query for complex querying.
Mike Leone, senior analyst, Enterprise Strategy Group

Meanwhile, natural language query doesn’t require significant expertise. It doesn’t force a user to write copious amounts of code to come up with an answer to what might be a relatively simple analytical question. It frees business users from having to request the help of data science teams — at least for basic queries. It opens analytics to more users within an organization.

“Good NLQ will help BI power users and untrained business users alike get to insights more quickly, but it’s the business users who need the most help and have the most to gain,” said Doug Henschen, principal analyst at Constellation Research. “These users don’t know how to code SQL and many aren’t even familiar with query constructs such as ‘show me X’ and ‘by Y time period’ and when to ask for pie charts versus bar charts versus line charts.”

“Think of all the people who want to run a report but aren’t able to do so,” echoed Jen Underwood, founder and principal consultant at Impact Analytix, an IT consulting firm in Tampa, Fla. “There’s some true beauty to the search. How many more people would be able to use it because they couldn’t do SQL? It’s simple, and it opens up the ability to do more things.”

In essence, natural language query and other low-code/no-code tools help improve data literacy, and increasing data literacy is a significant push for many organizations.

That said, in its current form it has limits.

“Extending that type of functionality to the business will enable a new demographic of folks to interact with data in a way that is comfortable to them,” Leone said. “But don’t expect a data revolution just because someone can use Alexa to see how many people bought socks on a Tuesday.”

The limitations

Perhaps the biggest hindrance to full-fledged natural language query is the nature of language itself.

Without even delving into the fact that there are more than 5,000 languages worldwide and an estimated 200 to 400 alphabets, individual languages are complicated. There are words that are spelled the same but have different meanings, others that are spelled differently but sound the same, and words that bear no visual or auditory relation to each other but are synonyms.

And within the business world, there are often terms that might mean one thing to one organization and be used differently by another.

Natural language query tools don’t actually understand the spoken or written word. They understand specific code and are programmed to translate a spoken or written query to SQL, and then translate the response from SQL back into the spoken or written word.
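That translation step, a guardrailed pattern mapped onto a SQL template, can be illustrated with a toy example. Real NLQ engines are vastly more sophisticated; this only shows the shape of the idea, using an invented sales schema:

```python
import re

# Toy guardrail: one family of English questions maps onto one SQL template.
# The schema (a "sales" table with color, model and year columns) is invented.
PATTERN = re.compile(
    r"how many (?P<color>\w+) (?P<model>\w+)s were sold in (?P<year>\d{4})",
    re.IGNORECASE,
)

def to_sql(question: str) -> str:
    """Translate a question into SQL, or fail if it falls outside the guardrails."""
    match = PATTERN.search(question)
    if not match:
        raise ValueError("question outside the supported guardrails")
    return (
        "SELECT COUNT(*) FROM sales "
        f"WHERE color = '{match['color'].lower()}' "
        f"AND model = '{match['model']}' "
        f"AND year = {match['year']}"
    )

sql = to_sql("How many blue Mustangs were sold in 2019?")
print(sql)
```

The ValueError branch is the toy version of the experience Baer describes: ask any random question and the tool simply has no structure to link it to.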

“Natural language query has trouble with things like synonyms, and domain-specific terminology — the context is missing,” Underwood said. “You still need humans for synonyms and the terminology a company might have because different companies have different meanings for different words.”

When natural language queries are spoken, accents can cause problems. And whether spoken or written, the slightest misinterpretation by the tool can result in either a useless response or, much worse, something incorrect.

“Accuracy is king when it comes to querying,” ESG’s Leone said. “All it takes is a minor misinterpretation of a voice request to yield an incorrect result.”

Over the next few years, he said, people will come to rely on natural language query to quickly ask a basic question on their devices, but not much more.

“Don’t expect NLQ to replace data science teams,” Leone said. “If anything, NLQ will serve as a way to quickly return a result that could then be used as a launching pad for more complex queries and expert analysis.”

While held back now by the limitations of language, that won’t always be the case. The tools will get more sophisticated, and aided by machine learning, will come to understand a user’s patterns to better comprehend just what they’re asking.

“Most of what’s standing in the way is a lack of experience,” DBInsight’s Baer said. “It’s still early on. Natural language query today is far advanced from where it was two years ago, but there’s still a lot of improvement to be made. I think that improvement will be incremental; machine learning will help.”

Top NLQ tools

Though limited in capability, natural language query tools do save business users significant time when asking basic questions of structured data. And some vendors’ natural language query tools are better than others.

Though one of the top BI vendors following its acquisition of Hyperion in 2007, Oracle lost momentum when data visualizations changed the consumption of analytics. Now that augmented intelligence and machine learning are central tenets of BI, however, Oracle is again pushing the technological capabilities of BI platforms. Oracle Analytics Cloud and Day by Day support voice-based queries and its natural language query works in 28 languages, which Henschen said is the broadest language support available.

“Oracle raised the bar on natural language query a couple of years ago when it released its Day By Day app, which used device-native voice-to-text and introduced explicit thumbs-up/thumbs-down training,” Henschen said.

Another vendor Henschen noted is Qlik, which advanced the natural language capabilities of its platform through its January 2019 acquisition of Crunch Data.

“A key asset was the CrunchBot, since rebranded as the Qlik Insight Bot,” Henschen said.

He added that Qlik Insight Bot is a bot-building feature that works with existing Qlik applications, and the bots can subsequently be embedded in third-party applications, including Salesforce, Slack, Skype and Microsoft Teams.

“It brings NLQ outside of the confines of Qlik Sense and interaction with a BI system,” Henschen said.

Tableau is yet another vendor attempting to ease the analytics process with a natural language processing tool. It introduced Ask Data in February 2019, and Tableau’s September 2019 update included the capability to embed Ask Data in other applications.

“When I think about designing a system and taking it the next step forward, Tableau is doing something. [It remembers if someone ran a similar query] and it gives guidance,” Underwood said. “It has the information and knows what people are asking, and it can surface recommendations.”

Baer similarly mentioned Tableau’s Ask Data, while Leone said that the eventual prevalence of natural language query will ultimately be driven by Amazon Web Services, Google and Microsoft.


Splice Machine 3.0 integrates machine learning capabilities, database

Databases have long been used for transactional and analytics use cases, but they also have practical utility to help enable machine learning capabilities. After all, machine learning is all about deriving insights from data, which is often stored inside a database.

San Francisco-based database vendor Splice Machine is taking an integrated approach to enabling machine learning with its eponymous database. Splice Machine is a distributed SQL relational database management system that includes machine learning capabilities as part of the overall platform.

Splice Machine 3.0 became generally available on March 3, bringing with it updated machine learning capabilities. It also has a new cloud-native, Kubernetes-based model for cloud deployment and enhanced replication features.

In this Q&A, Monte Zweben, co-founder and CEO of Splice Machine, discusses the intersection of machine learning and databases and provides insight into the big changes that have occurred in the data landscape in recent years.

How do you integrate machine learning capabilities with a database?

Monte Zweben

Monte Zweben: The data platform itself has tables, rows and schema. The machine learning manager that we have native to the database has notebooks for developing models, Python for manipulating the data, algorithms that allow you to model and model workflow management that allows you to track the metadata on models as they go through their experimentation process. And finally we have in-database deployment.

So as an example, imagine a data scientist in the insurance industry working in Splice Machine. They have an application for claims processing and they are building out models inside Splice Machine to predict claims fraud. There’s a function in Splice Machine called deploy, and what it will do is take a table and a model to generate database code. The deploy function builds a trigger on the database table that tells the table to call a stored procedure that has the model in it for every new record that comes in the table.

So what does this mean in plain English? Let’s say in the claims table, every time new claims would come in, the system would automatically trigger, grab those claims, run the model that predicts claim cause and outputs those predictions in another table. And now all of a sudden, you have real-time, in-the-moment machine learning that is detecting claim fraud on first notice of loss.
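The trigger-plus-model mechanism Zweben describes can be mimicked in miniature with SQLite standing in for Splice Machine’s deploy machinery, and a trivial Python function standing in for the fraud model. Everything here is a hypothetical stand-in, not Splice Machine’s actual generated code:

```python
import sqlite3

# Stand-in "model": flag unusually large claims as potential fraud.
def fraud_score(amount):
    return 1 if amount > 10_000 else 0

conn = sqlite3.connect(":memory:")
# Register the Python model so SQL (and the trigger) can call it.
conn.create_function("fraud_score", 1, fraud_score)
conn.executescript("""
    CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE predictions (claim_id INTEGER, is_fraud INTEGER);
    -- The trigger scores every new claim the moment it lands.
    CREATE TRIGGER score_claim AFTER INSERT ON claims
    BEGIN
        INSERT INTO predictions VALUES (NEW.id, fraud_score(NEW.amount));
    END;
""")

conn.execute("INSERT INTO claims (amount) VALUES (250.0)")
conn.execute("INSERT INTO claims (amount) VALUES (50000.0)")
rows = conn.execute("SELECT * FROM predictions ORDER BY claim_id").fetchall()
print(rows)  # → [(1, 0), (2, 1)]
```

The second claim is flagged the instant it is inserted, which is the "in-the-moment" scoring on first notice of loss that the quote describes.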

What does distributed SQL mean to you?

Zweben: So at its heart, it’s about sharing data across multiple nodes. That provides you the ability to parallelize computation and gain elastic scalability. That is the most important distributed attribute of Splice Machine.

In our new 3.0 release, we just added distributed replication. It’s another element of distribution where you have secondary Splice Machine instances in geo-replicated areas, to handle failover for disaster recovery.

What’s new in Splice Machine 3.0?

Zweben: We moved our cloud stack for Splice Machine from an old Mesos architecture to Kubernetes. Now our container-based architecture is all Kubernetes, and that has given us the opportunity to enable the separation of storage and compute. You literally can pause Splice Machine clusters and turn them back on. This is a great utility for consumption-based usage of databases.

Along with our upgrade to Kubernetes, we also upgraded our machine learning manager from an older notebook technology called Zeppelin to a newer notebook technology that has really gained momentum in the marketplace, as much as Kubernetes has in the DevOps world. Jupyter notebooks have taken off in the data science space.

We’ve also enhanced our workflow management tool, MLflow, which is an open source tool that originated with Databricks, and we’re part of that community. MLflow allows data scientists to track their experiments and has that record of metadata available for governance.

What’s your view on open source and the risk of a big cloud vendor cannibalizing open source database technology?

Zweben: We do compose many different open source projects into a seamless and highly performant integration. Our secret sauce is how we put these things together at a very low level, with transactional integrity, to enable a single integrated system. This composition that we put together is open source, so that all of the pieces of our data platform are available in our open source repository, and people can see the source code right now.

I’m intensely worried about cloud cannibalization. I switched to an AGPL license specifically to protect against cannibalization by cloud vendors.

On the other hand, we believe we’re moving up the stack. If you look at our machine learning package, and how it’s so inextricably linked with the database, and the reference applications that we have in different segments, we’re going to be delivering more and more higher-level application functionality.

What are some of the biggest changes you’ve seen in the data landscape over the seven years you’ve been running Splice Machine?

Zweben: With the first generation of big data, it was all about data lakes, and let’s just get all the data the company has into one repository. Unfortunately, that has proven time and time again, at company after company, to just be data swamps.

Data repositories work, they’re scalable, but they don’t have anyone using the data, and this was a mistake for several reasons.

Instead of thinking about storing the data, companies should think about how to use the data.
Monte Zweben, co-founder and CEO, Splice Machine

Instead of thinking about storing the data, companies should think about how to use the data. Start with the application and how you are going to make the application leverage new data sources.

The second reason why this was a mistake was organizationally, because the data scientists who know AI were all centralized in one data science group, away from the application. They are not the subject matter experts for the application.

When you focus on the application and retrofit the application to make it smart and inject AI, you can get a multidisciplinary team. You have app developers, architects, subject-matter experts, data engineers and data scientists, all working together on one purpose. That is a radically more effective and productive organizational structure for modernizing applications with AI.
